Monday, October 25, 2010

Is the "ORDER BY" in HQL and EJB-QL based on lexicography or dictionary?

This topic is inspired by Dominik Dunz's comment on my Hibernate tuning article "Revving up Your Hibernate Engine" on InfoQ.

In Java, you sort strings lexicographically (based on the underlying characters' encoding values) using class String's compareTo() method.
You can also sort strings based on a locale's dictionary using class Collator's compare() method.
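
Here is a minimal sketch of the difference (the strings and locale are just illustrative):

import java.text.Collator;
import java.util.Locale;

public class SortDemo {
  public static void main(String[] args) {
    String a = "resume";
    String b = "résumé";

    // Lexicographic: compares the characters' Unicode values, so 'e' (0x65)
    // sorts before 'é' (0xE9) and the result is a negative number.
    System.out.println(a.compareTo(b));

    // Dictionary (linguistic): uses the locale's collation rules.
    Collator collator = Collator.getInstance(Locale.FRENCH);
    collator.setStrength(Collator.PRIMARY);  // ignore accent and case differences
    System.out.println(collator.compare(a, b));  // prints 0 -- treated as equal
  }
}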

So which kind of sorting does the "order by" clause in HQL or EJB-QL support?
Currently there is no QL syntax for specifying either a lexicographic or a dictionary order. Both HQL and EJB-QL just pass the "order by" clause literally to the back-end database, so it is your database session that decides the sorting.

In the case of Oracle, both are supported: sorting lexicographically ("binary" in Oracle's terms) and sorting based on a dictionary ("linguistic" in Oracle's terms).
Your Oracle session decides which one applies. Specifically, if the session's NLS_COMP is "binary", it sorts lexicographically (based on the strings' underlying encoding values).
If the session's NLS_COMP is "linguistic", it sorts based on the dictionary of the locale you specified in NLS_SORT.

If you use Oracle's JDBC thin driver in an application server, the application server's JVM decides the values of NLS_COMP and NLS_SORT.

You can always do the sorting in your application tier based on your business logic instead of relying on your database, but the application-tier sort will probably be slower than the database sort.
There are, however, several complications if you want the back-end database sort to implement your business-logic sorting:
  1. Your database may only support lexicographical sorting;
  2. Even though lexicographical sorting is much simpler than dictionary sorting, your database's character encoding may not be the Unicode that Java strings use. However, it may not be a big deal to change your DB's character encoding to Unicode or to a subset of it.
    You also need to make sure that your DB's lexicographical sorting matches Java's. In the case of Oracle, binary sorting is basically the same as Java String's compareTo() method.
  3. Java's linguistic sorting may not be the same as your DB's. You need to carefully examine the documentation from both Java and your DB.
    You can find Oracle's linguistic sorting logic from this link.
  4. Things become even more complicated if you have a three-tier architecture where the front-end UI (either Swing or a browser) decides the sorting logic, because the same back-end database session can be shared by different front-end user sessions.
    You have to change your DB session's sorting whenever the front-end UI changes it (see the sketch below).
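
As a hedged sketch (the method shape and NLS values are just illustrative, assuming an Oracle connection obtained through JDBC), the per-locale adjustment could look like this:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SessionSortHelper {
  // Switches the Oracle session to linguistic sorting for the locale the front-end UI asked for.
  // nlsSortName must come from a trusted whitelist (e.g. "FRENCH"), never from raw user input.
  public static void applyLinguisticSort(Connection connection, String nlsSortName) throws SQLException {
    Statement stmt = connection.createStatement();
    try {
      stmt.execute("ALTER SESSION SET NLS_COMP = LINGUISTIC");
      stmt.execute("ALTER SESSION SET NLS_SORT = " + nlsSortName);
    } finally {
      stmt.close();
    }
  }
}

Remember to reset (or re-run) these session settings whenever a pooled session is reused by a user session with a different locale.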

Thursday, October 21, 2010

Commercial-Off-The-Shelf Enterprise Real Time Computing (COTS ERTC) -- Part 3: Operating System (OS) Requirements

Since the OS sits between the hardware and the Java programming language, it provides your COTS ERTC applications with more enabling functions. The latency and jitter requirements on the OS are accordingly much stricter than those on the hardware.

Again, general purpose OS's such as Windows NT, Linux and Solaris are designed primarily for high throughput at the cost of poor latency. Generally there are two trends for an OS to support COTS ERTC.
One is to stick with the general purpose OS and rely on fine tuning, through which SRT is usually achievable. These OS's keep the high-throughput design goal. Windows NT belongs to this category (although there are indeed quite a few commercial efforts to extend NT with RTC functions, they are proprietary add-ons and hence don't belong to COTS ERTC).

The other trend is to add RTC functions to the OS so that both throughput and RTC workloads can be handled, and even HRT can be implemented at the cost of slightly lower throughput.
More and more RTC features have been added to the mainline Linux kernel since version 2.6 through an RT patch (we will hereafter use "stock Linux" for general purpose Linux and "Linux RT" for Linux with the RT patch). Red Hat Enterprise MRG and SUSE Linux Enterprise Real Time (SLERT) are representative.
Oracle / Sun has also long had significant RTC features in Solaris itself.
Actually both Linux and Solaris are POSIX compliant, including the real-time and thread extensions, so they both belong to this category (there are also many other efforts to extend stock Linux with RTC functions, such as RTLinux and RTAI; they are more or less dual-kernel approaches. Because they are separate efforts that never got into the mainline Linux kernel, they don't belong to COTS ERTC).

1. Preemptable Kernel
Multi-threaded programming has been familiar to programmers for a long time. However, an OS fully preempting user-space threads doesn't necessarily mean its kernel is also fully preemptable. Actually, different OS's provide different degrees of preemption, and obviously a low degree of preemption means high latency and jitter.

Figure 2 in part 2 shows that the OS scheduler takes the "interval response time" to preempt an interrupted thread (usually a low-priority thread) with a preempting thread (usually a high-priority thread). The shorter the interval response time, the more preemptable the kernel.

Whenever the processor receives an interrupt, it calls an interrupt handler, a.k.a. an interrupt service routine (ISR) to service the interrupt.
Preemption latency is the time needed for the scheduler to determine which thread should run and the time for the new thread to be dispatched.
A context switch is when the kernel saves the state of the interrupted thread or process, loads the context of the preempting thread or process, and begins execution.
I will focus on the ISR and preemption latency because different OS's employ different strategies for them.

1.1 ISR
On Linux RT and Windows NT, the ISR is divided into two parts: the First-Level Interrupt Handler (FLIH) (the top half on Linux) and the Second-Level Interrupt Handler (SLIH) (the bottom half on Linux; the Deferred Procedure Call (DPC) on Windows NT).
The FLIH quickly services the interrupt, or records platform-specific critical information that is only available at the time of the interrupt, and schedules the execution of the SLIH for the remaining, longer-lived interrupt handling.
Because the FLIH typically masks interrupts at the same or a lower level until it completes, it affects preemption and causes jitter. So, to reduce jitter and the potential for losing data from masked interrupts, the OS should minimize the execution time of the FLIH and move as much work as possible to the SLIH.

The SLIH asynchronously completes long interrupt-processing tasks in a kernel thread scheduled by the FLIH. Because it runs in a thread, the user can assign a priority to it and the scheduler can dispatch it along with other threads.
For example, if your RT application thread has a higher priority than the SLIH, only the FLIH interrupts your RT application thread, and the SLIH will not run until your RT application thread is done.
Because the ISR in Figure 2 now effectively represents only the FLIH, the whole interval response time is cut short.

On Solaris the ISR is a single unit implemented in a kernel thread. Because such a thread has a higher priority than all non-ISR threads, including RT ones, it makes the kernel less preemptable and causes much larger jitter for your RT application threads than the previous approach.

Windows NT has additional jitter caused by DPCs being scheduled in a FIFO queue: if your high-priority DPC is put behind a low-priority one, the high-priority DPC can't execute until the earlier low-priority one is done.

1.2 Preemption Latency
Traditionally, when a low-priority thread calls a kernel function through a system call, it can't be preempted, even by a high-priority thread, until the system call returns. This is again primarily a high-throughput consideration (the more interrupts, the more overhead and the lower the throughput).
This is the situation for the stock Linux kernel 2.5 or earlier, which has many lengthy kernel code paths protected by spin locks or even by the so-called Big Kernel Lock (the BKL is basically a kernel-wide or global lock).

Changing the BKL into localized spin locks was the first step toward preemption. But a spin lock is typically not preemptable, because if it were preempted, the preempting thread could also try to spin-lock the same resource, which would cause deadlock.

Making the kernel more preemptable means breaking a lengthy code path into a number of shorter code paths with preemption points between them, which is what the stock Linux kernel 2.6 and later has enabled. At best, SRT can be achieved in this case.

The extreme approach is to convert all spin locks into sleeping mutexes so that kernel code is preemptable at almost any point, which is what Linux RT has enabled. HRT needs this capability.

However, because Linux should be able to handle both throughput and RTC workloads, a better and more practical approach may be adaptive locks, which behave as spin locks for short-running code paths and as mutexes for long-running code paths, based on runtime statistics.
Actually, SLERT 11 provides such adaptive locks.

Windows NT has been fully preemptable from the very beginning.

2. Priority-Based Scheduling
The scheduler in a general purpose OS is designed to maximize overall throughput and to assure fairness for all time-share (TS) threads / processes. To provide equitable behavior and ensure all TS threads / processes eventually get to run, the scheduler adjusts thread priorities dynamically: the priorities of resource-intensive threads are lowered automatically while the priorities of IO-intensive threads are boosted automatically. In other words, even if you initially assign a high priority to a TS thread, it will not starve other threads.

This is not desirable for RT threads, which always need to run before any lower-priority thread in order to minimize latency, at the cost of lower throughput for the other threads.
Besides the traditional time-slice, dynamic-priority TS threads, Windows NT, Solaris and stock Linux all provide RT threads, which have fixed priorities and always run before TS and other lower-priority threads.
In other words, the scheduler will not adjust those RT threads' priorities, and they will not be preempted by TS or other lower-priority threads unless they wait, sleep or yield.

Both stock Linux and Solaris provide two scheduling policies for RT threads. One is round-robin, which is similar to TS thread scheduling; the other is FIFO, where an earlier RT thread runs to completion before a later RT thread of the same priority level.

The priority level range for RT threads can't be too small; otherwise your RT thread scheduling flexibility will be severely constrained.
Windows NT includes 32 priority levels, of which 16 are reserved for the operating system and real-time processes. This range is really too tight.

Stock Linux's RT priority class provides 99 fixed priority levels, ranging from 1 to 99 (0 is left for non-RT threads).
The following RT thread priority mapping table is extracted from the Red Hat Enterprise MRG tuning guide:
Priority    Threads                        Description
1           Low priority kernel threads    Priority 1 is usually reserved for tasks that need to be just above SCHED_OTHER
2 - 69      Available for use              Range used for typical application priorities
70 - 79     Soft IRQs
80          NFS                            RPC, locking and authentication threads for NFS
81 - 89     Hard IRQs                      Dedicated interrupt processing threads for each IRQ in the system
90 - 98     Available for use              For use only by very high priority application threads
99          Watchdogs and migration        System threads that must run at the highest priority

Although an important feature of RT thread scheduling is the ability to schedule your RT application threads above kernel threads, doing so can cause the system to hang, and other unpredictable behavior such as blocked network traffic and blocked swapping, if crucial kernel threads are prevented from running as needed (now you should have a better feeling for how your RT thread is scheduled at the cost of lower overall system throughput).
So if your RT application threads run at a higher priority than kernel threads, make sure they don't run away, and allocate some time for the kernel threads.
For example, make sure your RT thread doesn't run too long, or runs periodically based on an RT timer, or is driven by external periodic RT events, or that you have multiple CPUs, at least one of which is dedicated to kernel threads.

3. Priority Inheritance
Priority inversion occurs when a high-priority thread blocks on a resource held by a low-priority thread, and a medium-priority thread preempts the low-priority thread and runs before the high-priority thread, which causes jitter for the high-priority thread.
Priority inheritance fixes the priority inversion problem by temporarily letting the low-priority thread inherit the priority of the high-priority thread, so that the formerly low-priority thread can run to completion without being preempted by the medium-priority thread. The inheriting thread restores its original low priority when it releases the lock.

Both Solaris and Linux RT support priority inheritance. Unfortunately Windows NT doesn't support it.
If possible, try to prevent a high-priority thread from sharing the same resource as a low-priority thread. Obviously this is even more important on Windows NT.

4. High-Resolution Timers
Section 1.5 in part 2 mentioned the need for high-resolution timers, which are backed by high-resolution clocks on most modern hardware. The OS just takes advantage of those hardware timers by providing different system calls for high-resolution timers besides the traditional system calls for regular timers.

For example, both Solaris and Linux support the system calls "timer_create" and "timer_settime", with clock type CLOCK_HIGHRES on Solaris or CLOCK_REALTIME / CLOCK_MONOTONIC on Linux (you need to enable the kernel parameter "CONFIG_HIGH_RES_TIMERS", available in 2.6.21 and later on x86), to access high-resolution timers.
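
From plain Java you don't call timer_create() directly; the JVM hides it. As a rough Java-level analogue (just a sketch, not the POSIX API itself), you typically rely on System.nanoTime(), which on Linux is usually backed by a monotonic high-resolution clock, and on a ScheduledExecutorService for periodic work:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HighResTickDemo {
  public static void main(String[] args) {
    final long start = System.nanoTime();  // monotonic, high-resolution clock
    ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    timer.scheduleAtFixedRate(new Runnable() {
      public void run() {
        long elapsedUs = (System.nanoTime() - start) / 1000L;
        System.out.println("tick at " + elapsedUs + " us");
      }
    }, 0, 1, TimeUnit.MILLISECONDS);  // a 1 ms period only keeps time if the OS timers are high-resolution
  }
}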

5. CPU Shielding
Windows NT, Solaris and stock Linux all support CPU shielding, which allows you to bind different processors / cores to different interrupts and threads, including both kernel and user-space ones. The bound CPU is shielded from unbound interrupts and threads.

For example, you bind your high-priority application thread to one CPU while other CPUs take care of the other threads, including kernel threads, and of interrupts, including NMI and SMI, so that you can be confident your high-priority application thread has low latency and is very predictable.
This means more to Solaris, because its ISR is implemented in a thread whose priority is higher than any non-ISR thread, including your RT application thread.

6. Others
6.1 Memory Pinning
Windows NT, Solaris and stock Linux all allow you to pin your high-priority thread's memory to physical memory so that it doesn't get swapped out to high-latency disks.
Due to the mechanics of disks, disk IO access latency is in milliseconds, which is orders of magnitude slower than memory access. So OS swapping is a major contributor to latency.

6.2 Early Binding
The late binding of dynamic libraries by the OS can introduce unpredictable jitter into your RT application threads. To avoid this jitter, both Linux and Solaris provide early binding of dynamic libraries through the environment variable LD_BIND_NOW.
Windows NT doesn't seem to support such early binding. To counter this, you can warm up your application (hereafter, warm-up means either the program's start-up phase or an initialization phase before the time-critical execution) before asking it to execute time-critical code.

6.3 Locking Improvement
Stock Linux uses the so-called "futex" to avoid system calls for uncontended locks. Solaris uses a similar mechanism called an "adaptive lock".

7. COTS ERTC Scenarios with the OS
Even if an OS provides both throughput and RTC functions, the RTC functions come at the cost of slight throughput degradation. Actually, many observations show that only a minority of workloads truly need tight HRT. Accordingly, users should always first try the OS without the RTC functions enabled.

For example, on Windows NT and stock Linux, if your low-latency requirements can be met through such tunings as RT threads, CPU shielding, memory pinning, priority inversion avoidance, HR timers, application warm-up, early binding and, on stock Linux, the preemptible kernel configuration, don't try Linux RT. Actually, many SRT requirements can be met using Windows NT or stock Linux.

If you need high predictability or tight HRT, you have to use a Linux RT such as MRG or SLERT, or Solaris.

Thursday, October 14, 2010

How to use 2 or more data sources in Hibernate along with Spring's Declarative Transaction Management?

You may quickly respond "just use Spring's JtaTransactionManager".
But wait. Before deciding to use JTA, you should make sure that local transactions really don't meet your requirements, because JTA requires many more resources and is much slower than local transactions.
Even if you have 2 or more data sources, you don't need to use JTA in the following cases:
  • No business method has to access more than 1 data source;
  • Even if your business method has to access more than 1 data source, you can still use a technique similar to "Last Resource Commit Optimization" with local transactions if you can tolerate occasional data inconsistency. 
Here is the example in my "Revving up Your Hibernate Engine": 

Our application has several service layer methods which only deal with database “A” in most instances; however occasionally they also retrieve read-only data from database “B”. Because database “B” only provides read-only data, we still use local transactions on both databases for those methods.
The service layer does have one method involving data changes on both databases. Here is the pseudo-code:
//Make sure a local transaction on database A exists
@Transactional (readOnly=false, propagation=Propagation.REQUIRED)
public void saveIsoBids() {
  //it participates in the above annotated local transaction
  insertBidsInDatabaseA();
  //it runs in its own local transaction on database B
  insertBidRequestsInDatabaseB(); //must be the last operation
}

Because insertBidRequestsInDatabaseB() is the last operation in saveIsoBids (), only the following scenario can cause data inconsistency:
The local transaction on database “A” fails to commit when the execution returns from saveIsoBids ().
However even if you use JTA for saveIsoBids (), you still get data inconsistency when the second commit phase fails in the two phase commit (2PC) process. So if you can deal with the above data inconsistency and really don’t want JTA complexities for just one or a few methods, you should use local transactions. 

Now suppose you decide to use local transactions, i.e. Spring's HibernateTransactionManager. In your context XML, you define the needed transaction manager bean:
  <tx:annotation-driven transaction-manager="txManager1"/>
  <bean id="txManager1"
 class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory1"/>
  </bean>

  <bean id="sessionFactory1"
 class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource" ref="dataSource1"/>
  </bean>

The above XML configures just one data source with its related Hibernate session factory and transaction manager.
The annotation-driven element only allows you to specify one transaction manager (it has to work this way; otherwise which transaction manager would an annotated service method use?) and it is global.
So what happens if you specify another transaction manager in a different context XML?
  <tx:annotation-driven transaction-manager="txManager2"/>
  <bean id="txManager1"

Based on my testing, the result is unexpected.

The solution is that only one transaction manager can be used with Spring's annotation-driven declarative transactions; the other transaction managers must be configured through Spring's XML-based transaction advice:

<tx:advice id="txAdvice" transaction-manager="txManager2">
  <tx:attributes>
    <!-- all other methods starting with 'get' are read-only -->
    <tx:method name="get*" read-only="true"/>

    <!-- other methods use the default transaction settings (see below) -->
    <tx:method name="*"/>
  </tx:attributes>
</tx:advice>

<bean id="txManager2"
 class="org.springframework.orm.hibernate3.HibernateTransactionManager">
  <property name="sessionFactory" ref="sessionFactory2"/>
</bean>

<aop:config>
  <aop:pointcut id="serviceOperation" expression="...ignored"/>
  <aop:advisor advice-ref="txAdvice" pointcut-ref="serviceOperation"/>
</aop:config>

Asynchronous (non-blocking) Execution in JDBC, Hibernate or Spring?

There is no so-called asynchronous execution support in JDBC, mainly because you want to wait for the result of your DML or DDL most of the time, or because there is too much complexity involved between the back-end database and the front-end JDBC driver. 
Some database vendors do provide such support in their native drivers. For example, Oracle supports non-blocking calls in its native OCI driver. Unfortunately it is based on polling instead of callbacks or interrupts.
Neither Hibernate nor Spring supports this feature.

But sometimes you do need such a feature. For example, some business logic is still implemented in legacy Oracle PL/SQL stored procedures and they run for quite a long time. The front-end UI doesn't want to wait for them to finish; it just needs to check the result later in a database logging table into which the stored procedure writes its execution status.
In other cases your front-end application really cares about low latency and doesn't care too much about how an individual DML statement is executed, so you just fire the DML at the database and forget about its running status.

Nothing can stop you from making asynchronous DB calls using multi-threading in your application (actually even Oracle recommends using multi-threading instead of polling OCI, for efficiency).
However, you must think about how to handle transactions and connections (or Hibernate sessions) across threads.
Before continuing, let's assume we are only handling local transactions instead of JTA.

1. JDBC
It is straightforward: you just create another thread (the DB thread hereafter) from the calling thread to make the actual JDBC call.
If such calls are frequent, you can use a ThreadPoolExecutor to reduce thread creation and destruction overhead.
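
For illustration, here is a minimal sketch (the pool size, procedure name and DataSource wiring are just assumptions) of a fire-and-forget JDBC call running in a small DB thread pool:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class AsyncDbCaller {
  // a small, reusable pool of DB threads to avoid per-call thread creation overhead
  private final ExecutorService dbThreadPool = Executors.newFixedThreadPool(4);
  private final DataSource dataSource;

  public AsyncDbCaller(DataSource dataSource) {
    this.dataSource = dataSource;
  }

  // Fires the long-running stored procedure and returns to the caller immediately.
  public void fireAndForget() {
    dbThreadPool.execute(new Runnable() {
      public void run() {
        try {
          Connection con = dataSource.getConnection();
          try {
            CallableStatement cs = con.prepareCall("{call long_running_proc}");
            cs.execute();  // the procedure logs its own status to a DB table
            cs.close();
          } finally {
            con.close();
          }
        } catch (Exception e) {
          e.printStackTrace();  // the caller has already returned; just log the failure
        }
      }
    });
  }
}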

2. Hibernate
You usually use the session context policy "thread" so that Hibernate automatically handles your session and transaction.
With this policy, you get one session and one transaction per thread. When you commit the transaction, Hibernate automatically closes the session.
Again you need to create a DB thread for the actual stored procedure call.

Some developers may wonder whether the new DB thread inherits its parent (calling) thread's session and transaction.
This is an important question. First of all, you usually don't want to share the same transaction between the calling thread and its spawned DB thread, because you want to return immediately from the calling thread; if both threads shared the same session and transaction, the calling thread couldn't commit the transaction, and long-running transactions should be avoided.
Secondly, Hibernate's "thread" policy doesn't support such inheritance: if you look at Hibernate's corresponding ThreadLocalSessionContext, it uses the ThreadLocal class instead of InheritableThreadLocal.

Here is sample code for the DB thread:
// Non-managed environment and "thread" policy is in place
// gets a session first
Session sess = factory.getCurrentSession();
Transaction tx = null;
try {
  tx = sess.beginTransaction();

  // call the long running DB stored procedure

  //Hibernate automatically closes the session 
  tx.commit();
}
catch (RuntimeException e) {
  if (tx != null) tx.rollback();
  throw e;
}

3. Spring's Declarative Transaction

Let's suppose your stored procedure call is included in method:
  @Transactional(readOnly=false)
  public void callDBStoredProcedure();

The calling thread has the following method to call the above method asynchronously using Spring's TaskExecutor:
  @Transactional(readOnly=false)
  public void asynchCallDBStoredProcedure() {
        //submits the work to the DB thread pool (this.taskExecutor)
        this.taskExecutor.execute(new Runnable() {
            @Override
            public void run() {
                //call callDBStoredProcedure()
            }
        });
  }

You usually configure Spring's HibernateTransactionManager and the default proxy mode (aspectj is the other mode) for declarative transactions. This class binds a transaction and a Hibernate session to each thread, and it doesn't support inheritance either, just like Hibernate's "thread" policy.

Where you put the above method callDBStoredProcedure() makes a huge difference.
If you put the method in the same class as the calling thread, the declared transaction for callDBStoredProcedure() doesn't take effect, because in proxy mode only external or remote method calls coming in through the AOP proxy (an object created by the AOP framework in order to implement the transaction aspect; it wraps an instance of your calling thread's class) are intercepted. This means that "self-invocation", i.e. a method within the target object (the wrapped instance of your class inside the AOP proxy) calling some other method of the target object, won't lead to an actual transaction at runtime, even if the invoked method is marked with @Transactional!

So you must put callDBStoredProcedure() in a different class configured as a Spring bean, so that the DB thread in method asynchCallDBStoredProcedure() can obtain that bean's AOP proxy and call callDBStoredProcedure() through the proxy. 
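
As a sketch under those constraints (the class and bean names here are made up for illustration), the stored procedure call lives in its own Spring bean and the calling bean goes through its proxy:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.task.TaskExecutor;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class StoredProcedureService {
  @Transactional(readOnly = false)
  public void callDBStoredProcedure() {
    // the long-running PL/SQL call goes here, inside its own transaction
  }
}

@Service
class BidService {
  @Autowired
  private StoredProcedureService storedProcedureService;  // Spring injects the AOP proxy
  @Autowired
  private TaskExecutor taskExecutor;

  public void asynchCallDBStoredProcedure() {
    taskExecutor.execute(new Runnable() {
      public void run() {
        // external call through the proxy, so @Transactional takes effect
        storedProcedureService.callDBStoredProcedure();
      }
    });
  }
}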

Wednesday, October 13, 2010

Protect Hibernate Collection

Suppose you have a unidirectional one-to-many association between department and employee. The department's mapping configuration uses eager loading for its employees, and the cascade is "all, delete-orphan".

A developer usually writes the following set association in the Department POJO:

public class Department {
  private Set employees;

  public Set getEmployees() {
      if (this.employees == null) {
        this.employees = new HashSet();
      }
      return this.employees;
  }

  public void setEmployees(Set employees) {
    this.employees = employees;
  }

} //end of class Department

The problem is with the method setEmployees(). Suppose you loaded a department object along with its 10 employees into your front-end UI, then removed 2 employees, employee1 and employee2, from the set and added a new one, employee11.
If you pass your new set of employees in a new Java Set instance and call setEmployees(), you will not get what you want.
This is the result: employee1 and employee2 are not deleted from your back-end DB as you expect, but the new employee11 is indeed inserted into the back-end DB.

Here is why: when Hibernate initializes the employees set, it replaces it with its own Set implementation, PersistentSet, for bookkeeping among other reasons.
So if you remove the 2 employees from Hibernate's set, it will remember your delete actions. Otherwise Hibernate simply doesn't know you want to delete anything.

This will further cause a unique constraint problem if you later change your mind and don't want to delete a previously removed employee from the set.
For example, you first removed employee1 from the set, then rolled it back by creating a new employee instance that has the same identity value (suppose the identity property is SSN and your DB has a unique constraint on SSN).
You are allowed to put the new employee into the set because you have removed the original employee1 from the set. But when you try to save the set to the database, you will get a unique constraint violation because the original employee1 was never deleted from the DB.

So you should defensively change the method setEmployees() to protected and add some helper methods. The Department class then looks like this:

public class Department {
  private Set employees;

  public Set getEmployees() {
      if (this.employees == null) {
        this.employees = new HashSet();
      }
      return this.employees;
  }

  //leaves it to Hibernate
  protected void setEmployees(Set employees) {
    this.employees = employees;
  }
  public boolean addEmployee(Employee employee) {
    return getEmployees().add(employee);
  }
  public boolean removeEmployee(Employee employee) {
    return getEmployees().remove(employee);
  }

} //end of class Department
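
For example (a hedged usage sketch; session, deptId, employee1, employee2 and employee11 are just illustrative names), the front end's changes should be applied to the Hibernate-managed collection itself:

Department dept = (Department) session.get(Department.class, deptId);

// work on Hibernate's PersistentSet through the helper methods,
// instead of replacing it with a brand-new java.util.HashSet
dept.removeEmployee(employee1);  // delete-orphan will remove it from the DB
dept.removeEmployee(employee2);
dept.addEmployee(employee11);    // the transient employee is cascaded and inserted

// the changes are flushed when the transaction commits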



Lastly, why could you still save the new employee11 even though you used a new Java Set instance? Because the new employee11's ID value is either null or 0, which differs from the ID's "unsaved-value", Hibernate treats it as a new object and inserts it.

Tuesday, October 12, 2010

Wake up, America!

The more I see at work and at home, the more I feel the US has been slipping into a lazy country, with most people born here resting on their laurels.

I once worked at a client where a guy used to spend most of his time watching online videos. He really made things worse when he laughed so hard at some "exciting" moments in the videos that everybody who walked past his cubicle knew what was happening over there.
Eventually he was let go. The client is not that dull.

There is another working situation that I feel is the craziest environment I've ever experienced in my 10 years of working.
My project brought several new graduates to assist the client with QA testing. All of us sat in a big cubicle. I admit these guys could get their assigned work done.
However, when they had down time, they just spent most of it watching online videos, chatting online or browsing the internet. This may be OK for some project managers.
But I really had a hard time when I found they chatted with each other almost every 5 minutes about such personal topics as car models, lunch, dinner, sports, movies and so on. Sometimes they even had very long leisurely chats in spite of the client sitting next to us.
These guys were leaving a bad impression on the client and eventually damaging the company's reputation.

My company had 2 career paths: one was business analyst; the other was developer. The above talkative guys all followed the business analyst career path. They should be talkative, but they should talk to the clients intelligently, because they would eventually be the sales guys for the company. How could they be intelligent if they didn't take advantage of their downtime?

I also had a co-developer who used to do things much like the first guy I mentioned above.
While this survey shows most developers find it hard to learn new technologies, the guy instead wasted most of his downtime watching games and talking.
If you are also in the IT industry, you know you will definitely be a loooooser if you can't find time to update your existing skills or learn new technologies.
I really feel quite pessimistic about such guys' futures!

One of my neighbors has probably been out of a job for almost 2 years. Not long ago Congress extended unemployment insurance to an unprecedented 99 weeks.
Oh man, the US is encouraging you to lose your job and nourishing the lazy. Even though I was born in socialist mainland China, I have to tell you the US is much, much more humane and considerate, and has much better common welfare (of course).
But why couldn't those people make a resolution and push themselves to learn some skills during those 99 weeks??

My wife successfully finished her Ph.D. in 5 years and has also done about 3 years of post-doc research. Looking at her school, most Ph.D. students are either Chinese or Indian.
What is more interesting is that her adviser just can't find any American students, and actually doesn't like American students at all, because he thinks they don't work hard (the adviser is of Israeli origin and his wife is a typical white American).

You should not be surprised when you read the news that China just surpassed Japan to become the 2nd largest economy in the world and will probably overtake the US as early as 2010.

But it is still not too late, and the US still holds quite a large advantage. Americans, you need to wake up, forget about your past glories and work hard right now!

Thursday, October 7, 2010

My technical article "Revving up Your Hibernate Engine" published on InfoQ

After more than 2 months of review, this Hibernate tuning article, co-authored by my colleague Stewart Clark and me, was finally published.

What makes it different from other Hibernate tuning articles is that (1) it covers most Hibernate tuning skills, some of which are very important and effective yet poorly documented; (2) it also covers some relevant DB knowledge (the authors also come from a strong DB background); and (3) it shows many real-world examples based on our project experience with our client.

I recall that my last Java Memory Model article on DZone was reviewed for only one day before being published. By comparison, InfoQ is much more careful and detail-oriented than DZone. I like it.