I'm glad that you find the code useful.
There is no problem if two threads happen to build the same cached fragment at the same time, because the generated content is (or should be) identical. One of the threads will put the page fragment in the cache, and the other thread will overwrite it with the same content. This happens rarely, and when it does, the performance loss is minimal and there are no side effects. Once it has happened for a page fragment, it won't happen again, assuming that you don't remove the fragment from the cache.
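The benign race described above can be sketched as follows. This is a minimal example rather than the article's actual code, and `renderFragment` is a hypothetical stand-in for the real JSP rendering; the point is that duplicate puts of identical content are harmless:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FragmentCache {
    // ConcurrentHashMap keeps the map's internal structure safe without
    // an application-level lock; cache hits pay no synchronization cost.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String getFragment(String key) {
        String fragment = cache.get(key);
        if (fragment == null) {
            // Two threads may both reach this point for the same key.
            // Each renders the same content, so the second put simply
            // overwrites the first with identical data: a benign race.
            fragment = renderFragment(key);
            cache.put(key, fragment);
        }
        return fragment;
    }

    // Hypothetical placeholder for the real page-fragment rendering.
    private String renderFragment(String key) {
        return "<div>" + key + "</div>";
    }
}
```

Because the rendered content is deterministic, no thread can ever observe a "wrong" value, which is why the extra lock buys nothing here.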
On the other hand, if you use a cache lock, the JVM will spend a lot of time on the synchronized accesses. Think of a real app with hundreds of JSP pages, each of them executing in tens of threads at the same time. If most of the content is cached to gain performance, you want the caching code to do as little work as possible. In a production environment, you might even want to take out the lines of code that are useful only for debugging the application.
I agree that synchronization is often overlooked in Web applications, but in this article's case the access to the cache is left unsynchronized deliberately. Unnecessary synchronization is notorious for slowing down Java applications.