While I can't comment on the exact CPU/GPU comparison for your specific setup, I can say that the general rule of thumb in GPU computing is that vectorizing your code, so that each call operates on as much data as possible, will give the best performance. That said, since you are already using bsxfun, I would go a step further and use the following code, which avoids the for loop and operates on all of your data in a single call.
Also, the timeit/gputimeit functions are the best choice for comparing CPU and GPU execution because they each report an average time over multiple runs. Furthermore, gputimeit accounts for the fact that GPU operations execute asynchronously, while tic/toc does not.
gputimeit(@()bsxfun(@minus, x1GPU, x2GPU))
timeit(@()bsxfun(@minus, x1CPU, x2CPU))
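If you do want a rough tic/toc measurement anyway, you need to force the GPU to finish its queued work before stopping the timer; otherwise you measure only the asynchronous kernel launch, not the computation. A minimal sketch, assuming x1GPU and x2GPU already exist on the device:

```matlab
% tic/toc alone can stop the clock while the GPU is still working.
% wait(gpuDevice) blocks until all queued GPU operations complete,
% so the elapsed time reflects the actual computation.
tic
result = bsxfun(@minus, x1GPU, x2GPU);
wait(gpuDevice)   % synchronize with the device before reading the timer
toc
```

gputimeit does this synchronization for you, which is why it is the preferred tool.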
I can confirm that when I run the code you provided, it takes about 4 seconds on the GPU and 1.5 seconds on the CPU. For the code that I have provided, gputimeit reports an average time of 0.0855 seconds on the GPU while timeit reports (again) an average of 1.5 seconds on the CPU.
The bottom line is that launching GPU operations from inside a CPU-side for loop generally does not give the best performance: each iteration pays kernel-launch overhead for only a small amount of work. You should always try to replace code like this with a vectorized version if possible.
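To make the comparison concrete, here is a sketch of the two approaches. The array sizes are hypothetical, so substitute your own data:

```matlab
% Hypothetical sizes -- replace with your actual data.
x1GPU = gpuArray(rand(1000, 1));   % column vector on the device
x2GPU = gpuArray(rand(1, 4000));   % row vector on the device

% Loop version: one small GPU operation per iteration (slow).
resultLoop = zeros(numel(x1GPU), numel(x2GPU), 'gpuArray');
for k = 1:numel(x2GPU)
    resultLoop(:, k) = x1GPU - x2GPU(k);
end

% Vectorized version: the whole pairwise expansion in a single call (fast).
resultVec = bsxfun(@minus, x1GPU, x2GPU);

% Both produce the same matrix of pairwise differences.
isequal(gather(resultLoop), gather(resultVec))
```

The vectorized call gives the GPU one large, regular workload instead of thousands of tiny ones, which is exactly the pattern GPUs are built for.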