"Time" is "Time."
Any "difference in time" you observe is therefore "merely a coincidence." "A trick of chance." It reflects the unpredictable amount of time consumed by the kernel-to-user transition on your particular hardware. In other words, it is "not a valuable number."
Also remember that, in order to get the time "in userspace," a transition to kernel-mode and back may occur (and, if it does, the cost of that round-trip is folded into your reading "for no good reason"). In other words, your result would be highly "tainted" no matter how you obtained it.
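To see this "taint" directly, here is a minimal sketch: even just *reading* the clock from user-space has a cost, and that cost varies from call to call. It uses Python's `time.perf_counter_ns()` as the clock, which is an illustrative choice on my part; whether a given read actually enters the kernel depends on your OS (on Linux such reads are usually serviced by the vDSO without a full syscall, but that is not guaranteed).

```python
import time

def clock_read_deltas(samples=1000):
    """Return the deltas (in ns) between back-to-back clock reads.

    Each delta is pure measurement overhead: no work happens between
    the two reads, so any nonzero value is the cost of reading the
    clock itself, plus whatever jitter the system injects.
    """
    deltas = []
    for _ in range(samples):
        a = time.perf_counter_ns()
        b = time.perf_counter_ns()
        deltas.append(b - a)
    return deltas

deltas = clock_read_deltas()
# The spread between min and max is the "fuzz" being discussed above.
print("min:", min(deltas), "ns  max:", max(deltas), "ns")
```

Run it a few times and you will see the min and max drift from run to run, which is exactly why any single reading is "not a valuable number."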
My recommendation: "simply measure the time-interval from user-space only." Yes, the result you obtain is "fuzzed" by the fact that you're in user-space, but then again, so is everything else that you do in user-space! (Indeed, the result could be "fuzzed" quite considerably, given that yours is not the only user-land process that might be running on this particular system at this particular time.) Instead of trying to measure that "fuzz," given that every such measurement you take will undoubtedly come out different, simply "bill it off to overhead."
If what you are interested in measuring is "the time as perceived by a user-land program," then measure that time and nothing else. Now, measure it (say...) 10,000 times and take the average. The result that you will thus obtain is a "boots-on-the-ground, pragmatic, usable answer that you can run with."
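The averaging approach above can be sketched as follows. The workload (a sum of squares) and the repetition count are placeholders of my own choosing; substitute whatever user-land operation you actually care about.

```python
import time

def average_runtime_ns(fn, repetitions=10_000):
    """Time fn() `repetitions` times from user-space; return the mean in ns.

    Each individual sample is "fuzzed" by scheduling, cache state, and the
    clock-read overhead itself; averaging bills all of that to overhead.
    """
    total = 0
    for _ in range(repetitions):
        start = time.perf_counter_ns()
        fn()
        total += time.perf_counter_ns() - start
    return total / repetitions

def workload():
    # Placeholder user-land work; replace with the code you want to measure.
    return sum(i * i for i in range(100))

mean_ns = average_runtime_ns(workload)
print(f"average over 10,000 runs: {mean_ns:.0f} ns")
```

Note that this deliberately measures wall-clock time as the program perceives it, context switches and all, which is the "pragmatic, usable answer" the advice above is after.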