We know that 1 = 2... because we can have one 2. It's just: how do you teach a computer that 1 = 2 & 3 & 4 & 5 and so on, without the computer needing to see a 2 as a one?
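(If the question is how a program can keep "one instance of the value 2" separate from "the value 2 itself", a multiset does exactly that. A minimal sketch in Python, assuming that reading of the post; none of this comes from the thread itself:)

Code:
from collections import Counter

# A Counter is a multiset: it stores each value together with
# how many of that value we hold, so the count 1 and the value 2
# never get confused with each other.
bag = Counter([2])             # one 2
print(bag[2])                  # -> 1 : we hold one instance of 2
print(sum(bag.elements()))     # -> 2 : the held values sum to 2
bag.update([3, 4, 5])          # now one each of 2, 3, 4 and 5
print(sorted(bag.elements()))  # -> [2, 3, 4, 5]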
I'm no mathematician, so perhaps I'm missing something here, but I don't really understand what the fuss is about (at least in the Wikipedia article that was mentioned above).
Writing that 1/3 = .33 or .333 or .33333 is an approximation at best, so the equals sign should NOT stand between the two sides. For the same reason, it can't be the starting point for any further deductions.
We can say that 1/3 = .(3), i.e. the repeating decimal, but then all the subsequent operations in the "proof" are impossible to carry out in the first place.
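(For what it's worth, the reason the repeating decimal is exact rather than an approximation is the standard geometric-series argument; this is textbook material, not something from the thread. With ratio 1/10, the series \sum_{k\ge 1} r^k sums to r/(1-r) = 1/9, so:)

\[
0.\overline{3} \;=\; \sum_{k=1}^{\infty}\frac{3}{10^{k}} \;=\; \frac{3}{9} \;=\; \frac{1}{3},
\qquad
0.\overline{9} \;=\; \sum_{k=1}^{\infty}\frac{9}{10^{k}} \;=\; \frac{9}{9} \;=\; 1.
\]

The second identity is presumably the one the Wikipedia article discusses: 0.999... is not an approximation that falls short of 1, it is another way of writing 1.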
Quote:
Originally Posted by sycamorex
I'm no mathematician, so perhaps I'm missing something here, but I don't really understand what the fuss is about (at least in the Wikipedia article that was mentioned above).
Writing that 1/3 = .33 or .333 or .33333 is an approximation at best, so the equals sign should NOT stand between the two sides. For the same reason, it can't be the starting point for any further deductions.
The problem with these "proofs" is that a number of mathematical rules are applied, some only partially correctly, while others are ignored. The rules that are applied are trivial and logical for everyone; the ones that are ignored are mostly unknown and/or non-trivial for the general public. Often the "=" sign is used incorrectly, or multiplications are performed on both sides of an equation when that is not allowed. Although the stunning outcomes are sometimes nice to see (I remember one such proof for 1 = -1), taking such proofs seriously usually demonstrates that one has not mastered mathematics sufficiently.
jlinkels
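(To make jlinkels' point concrete, here is a common version of the bogus 1 = 2 "proof"; not necessarily the one he remembers, but it hides exactly the kind of disallowed operation he describes. Every line looks like routine algebra, yet one step silently divides both sides by zero:)

\[
\begin{aligned}
a &= b \\
a^{2} &= ab \\
a^{2} - b^{2} &= ab - b^{2} \\
(a+b)(a-b) &= b(a-b) \\
a+b &= b \qquad \text{(invalid: both sides were divided by } a-b = 0\text{)} \\
2b &= b \\
2 &= 1
\end{aligned}
\]

The 1 = -1 variants usually hide the error in \sqrt{xy} = \sqrt{x}\,\sqrt{y} instead, which only holds when x and y are non-negative.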
That way a clever mathematician (are there any dumb ones?) can prove (to an average person) almost anything by cherry-picking the mathematical rules that suit him/her. I guess the same can be true in other branches of science, where conclusions may sometimes depend on a scientist's *interpretation* of the data.