Linux - Software: This forum is for Software issues.
I need to fit cos^2(x) (or any non-power or non-polynomial function) to some lab data, but can't find an intuitive way to do so. What's the most convenient and user-friendly software tool for this?
Many thanks!
Last edited by wagscat123; 11-09-2016 at 08:47 PM.
Like fitting it as a line? There's no function for cos^2(x) offered in Calc (I was so desperate I even tried Excel, and it couldn't do it either). LabPlot, and the whole thing in general, is proving to take longer to figure out than the rest of the task, so at this point I'll just throw the data in a graph and compare it to a cos^2(x) function.
In cell A1, put 0; A2 gets =A1+1; A3 gets =A2+1, and so on down.
B1 gets =COS(RADIANS(A1))^2 (Calc's COS expects radians), copied down the column. Formulae are copied intelligently in spreadsheets, so it's quick. Then graph column B against column A.
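If the spreadsheet route stays frustrating, the same two columns can be generated in a few lines of Python (just a sketch; the 15-degree step is an arbitrary choice here):

```python
import math

# Column A: angles in degrees; column B: cos^2 of that angle.
# Mirrors filling A with 0, 15, 30, ... and B with =COS(RADIANS(A1))^2.
table = [(deg, math.cos(math.radians(deg))**2) for deg in range(0, 181, 15)]
for deg, value in table:
    print(deg, round(value, 4))
```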
I was just hoping for an overly simplistic solution, and kinda gave up when I saw it involved that much learning for such a small task.
But it was simple intensity measurements of vertically or horizontally polarized light versus the angle of the (polarizer) analyzer it went through if you were curious.
>>>There's no function for cos^2(x) offered in Calc (I was so desperate and tried Excel and it couldn't do it either)
Since cos^2(x) = (cos(x))^2 = cos(x)*cos(x) = power(cos(x), 2), what is the problem?
Calc has all of the trigonometric functions, plus the POWER function for raising a base to an exponent. I just tried both forms in Calc. You don't need a function for cos^2(x) directly, as it can be derived; this keeps spreadsheet interfaces simpler.
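For anyone scripting instead of using a spreadsheet, the same equivalence is easy to check in Python:

```python
import math

x = math.radians(30)            # 30 degrees, converted to radians
a = math.cos(x)**2              # (cos(x))^2
b = math.cos(x)*math.cos(x)     # cos(x)*cos(x)
c = pow(math.cos(x), 2)         # power(cos(x), 2)
print(a, b, c)                  # all three forms agree (about 0.75)
```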
If you measured the light intensity at evenly spaced intervals (for example, every 15 degrees) over a 180 degree or 360 degree range, then you could do a very simple Fourier transform to get the amplitude and phase of the cos^2(x), using the trigonometric identity cos^2(x)=0.5(cos(2*x)+1).
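That identity is easy to sanity-check numerically before relying on it; a minimal sketch:

```python
import math

# Spot-check cos^2(x) == 0.5*(cos(2*x) + 1) at a few angles.
for deg in (0, 15, 37, 90, 123):
    x = math.radians(deg)
    assert abs(math.cos(x)**2 - 0.5*(math.cos(2*x) + 1)) < 1e-12
print("identity holds at all sampled angles")
```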
I will try to post a sample program in the near future.
Code:
# example-2016-12-07.py
# example program to fit cos^2(x) function with arbitrary phase and DC offset
# to equally spaced data (see comments below) using a two-point Fourier transform.
import math

# conversion factor
radian_per_degree = math.pi/180.0

# For accurate FT results, data must be uniformly distributed in x as follows:
#   x spacing = 180/N degrees with N >= 3 points,
#   or x spacing = 360/N degrees with N == 3 or N >= 5 points.
# Caution: Program does not check validity of data distribution in x.

# The following example data consists of phase shifted cos^2(x) + DC offset + noise.
# rawdata is given as a list of points. Each point is given as a list [angle, intensity].
rawdata = [[0.0, 13.2], [30.0, 20.8], [60.0, 20.6], [90.0, 11.3], [120.0, 2.9], [150.0, 4.8]]

# initialize sums to zero
num_pts = 0
c2x = 0.0
s2x = 0.0
dc = 0.0

# compute Fourier transform (only the 3 necessary coefficients)
for point in rawdata:
    num_pts += 1
    c2x += math.cos(radian_per_degree*2*point[0])*point[1]
    s2x += math.sin(radian_per_degree*2*point[0])*point[1]
    dc += point[1]
c2x *= 2.0/num_pts
s2x *= 2.0/num_pts
dc *= 1.0/num_pts

# print results to terminal
print("Curve is", c2x, "* cos(2*x) +", s2x, "* sin(2*x) +", dc)
peak = math.atan2(s2x, c2x)/radian_per_degree/2
if peak < 0:
    peak += 180.0
print("Estimated peak is at", peak, "degree")
# done
Run it in python3:
Code:
$ python3 example-2016-12-07.py
Curve is 0.9833333333333348 * cos(2*x) + 9.728352035845194 * sin(2*x) + 12.266666666666667
Estimated peak is at 42.11409808082147 degree
$
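For what it's worth, the two printed coefficients can be combined into a single amplitude and peak angle. A sketch using the values copied from the output above (the hypot/atan2 recombination is standard, not part of the original program):

```python
import math

# Coefficients copied from the program's output above.
c2x = 0.9833333333333348
s2x = 9.728352035845194
dc = 12.266666666666667

# c2x*cos(2x) + s2x*sin(2x) == A*cos(2*(x - peak)) with A = hypot(c2x, s2x);
# since cos(2*t) = 2*cos^2(t) - 1, the fit can also be written
# I(x) = 2*A*cos^2(x - peak) + (dc - A).
amplitude = math.hypot(c2x, s2x)
peak = math.degrees(math.atan2(s2x, c2x)) / 2
if peak < 0:
    peak += 180.0
print("amplitude:", amplitude)
print("peak (degrees):", peak)
```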
An alternative would be to look at R. Pinched from
Code:
library(nls2)
x <- c(0.00, 0.41, 0.76, 1.20, 1.55, 2.00, 2.34, 2.81, 3.10)
y <- c(0.99, 0.74, 0.49, 0.13, 0.05, 0.14, 0.51, 0.84, 0.98)
m <- nls2(y ~ a * (cos(x) * cos(x)), start=c(a=1))
cor(y,predict(m))
# Uncomment the next line for plotting
#png(file = "plot.png")
plot(x,y)
lines(x,predict(m),lty=2,col="red",lwd=3)
# Uncomment the next line for plotting
#dev.off()
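If R isn't available, the same one-parameter model y ~ a*cos^2(x) has a closed-form least-squares solution, sketched here in plain Python with the same data as the R example:

```python
import math

# Same data as the R example (x already in radians).
x = [0.00, 0.41, 0.76, 1.20, 1.55, 2.00, 2.34, 2.81, 3.10]
y = [0.99, 0.74, 0.49, 0.13, 0.05, 0.14, 0.51, 0.84, 0.98]

# The model y = a*cos^2(x) is linear in a, so least squares
# gives a = sum(y_i*c_i) / sum(c_i^2) with c_i = cos^2(x_i).
c = [math.cos(v)**2 for v in x]
a = sum(yi*ci for yi, ci in zip(y, c)) / sum(ci*ci for ci in c)
residuals = [yi - a*ci for yi, ci in zip(y, c)]
print("a =", a)
print("RMS residual =", math.sqrt(sum(r*r for r in residuals) / len(x)))
```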
Last edited by allend; 12-07-2016 at 02:00 PM.
Reason: Fixed the link
For what it's worth, "R" is a very important tool (open source) that you need to spend time getting to know.
I was recently involved in a project that used both SPSS® and SAS®, and I was very pleasantly surprised to find that both of these now have well-integrated interfaces to "R."
R, like S, is designed around a true computer language, and it allows users to add additional functionality by defining new functions. Much of the system is itself written in the R dialect of S, which makes it easy for users to follow the algorithmic choices made. For computationally-intensive tasks, C, C++ and Fortran code can be linked and called at run time. Advanced users can write C code to manipulate R objects directly.
Many users think of R as a statistics system. We prefer to think of it as an environment within which statistical techniques are implemented. R can be extended (easily) via packages. There are about eight packages supplied with the R distribution, and many more are available through the CRAN family of Internet sites, covering a very wide range of modern statistics.
I think that it's popular partly because its approach is just a bit different. An upward-compatible language from (of course) "S," it enables you to build analyses with the full power of a programming language, instead of having to "piece together 'wedgies' that don't quite fit together" (my analogy) as you must do with most mainstream tools. In that project, I was able to use "just a little bit of 'R' juice" to replace some very-many-step processes, and yet I never had to leave the environment of the stats package I was then using because both of them had built "official" support for it.
I ended up completing the project successfully without having to solve the original fitting question, but I just tried out the Python script now out of personal curiosity, and it was a great demonstration. Thank you to everyone for your answers and help!