Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I know that you can trash your file system using either of those commands (in DOS and UNIX-like systems, respectively). I was just curious whether actual physical hardware damage could occur from poorly written software.
It used to be that you could smoke a monitor by sending it signals that would cause oscillations at frequencies at which it was not intended to run (allegedly; I've never seen it actually happen). I doubt that modern monitors, especially LCDs, are subject to the same failure.
I think some wireless cards have the capability to exceed legal power limits, which could potentially bring you to the attention of 'the authorities'. Not likely to happen, though.

Flooding a LAN with traffic is a possibility, but unless there are other devices on the network that control machinery, it seems unlikely that it could cause any physical damage.
There may be some potential additional wear and tear to disk drives if they were inadvertently put into a loop seeking across the platter, but that seems fairly low risk.
The one thing that comes to mind, and I don't really know the details, is the possibility to misprogram something like the CPU clock speed or other environmental aspects such as fan control.
--- rod.
Quote:
Originally Posted by theNbomr
It used to be that you could smoke a monitor by sending it signals that would cause oscillations at frequencies at which it was not intended to run (allegedly; I've never seen it actually happen). I doubt that modern monitors, especially LCDs, are subject to the same failure.
All monitors since around 2004 have auto-shutoff for bad frequencies!
Quote:
I think some wireless cards have the capability to exceed legal power limits, which potentially could happen and bring you to the attention of 'the authorities'.
'The authorities' are more concerned about illegal radio station operators!!
Quote:
Flooding a LAN with traffic is a possibility, but unless there are other devices on the network that control machinery, it seems unlikely that it could cause any physical damage.
Most things shut off on bad signals, just so these things don't happen.
Quote:
There may be some potential additional wear and tear to disk drives if they were inadvertently put into a loop seeking across the platter, but that seems fairly low risk.
There would be far more wear and tear if you spun it up, then spun it down, then up again, over and over.
Quote:
The one thing that comes to mind, and I don't really know the details, is the possibility to misprogram something like the CPU clock speed or other environmental aspects such as fan control.
IMHO, the part of your first post that can make it dangerous is about programming hardware. If you are just learning assembler language, maybe you should stick to writing userland programs, run by a normal user. Create a new user for this. That way even your regular home directory will be safe.
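The suggestion above can be sketched as a few shell commands. This is a hypothetical example (the user name `asmtest` is made up); the point is simply that a throwaway account keeps a runaway program away from your real home directory.

```shell
# Create a throwaway user with its own home directory for
# assembler experiments (requires root privileges).
sudo useradd --create-home --shell /bin/bash asmtest

# Log in as that user and run your experiments there.
sudo -u asmtest -i

# When you are done, remove the user and its home directory.
sudo userdel --remove asmtest
```

Anything the program scribbles over is then confined to `/home/asmtest`, which you were going to delete anyway.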
Quote:
IMHO, the part of your first post that can make it dangerous is about programming hardware.
Would programming this way under an emulator such as QEMU still be just as dangerous? I wouldn't think so, because none of the instructions are being executed natively by the processor. Hopefully, if anything goes wrong, the emulator should simply report an error, and there should be no unusual sounds coming from the tower case.
I'm guessing virtualization wouldn't work, because it isn't full emulation (i.e. most instructions are still being executed natively), so I think emulation would be the way to go if you're writing an OS from scratch or something similar.
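For illustration, here is roughly what running hand-written code under QEMU looks like. This is a sketch, assuming `nasm` and `qemu-system-i386` are installed; the boot-sector program itself is a made-up "hello" example, not anything from this thread.

```shell
# Write a minimal 16-bit boot sector that prints a message via the BIOS.
cat > boot.asm <<'EOF'
        org 0x7C00          ; BIOS loads the boot sector at this address
start:  xor ax, ax
        mov ds, ax          ; DS = 0 so that DS:SI addresses our string
        mov si, msg
.loop:  lodsb               ; load next character into AL
        or al, al           ; zero terminator reached?
        jz .halt
        mov ah, 0x0E        ; BIOS teletype output
        int 0x10
        jmp .loop
.halt:  hlt
        jmp .halt
msg:    db "Hello from QEMU", 0
        times 510-($-$$) db 0
        dw 0xAA55           ; boot signature
EOF

# Assemble to a flat binary and boot it inside the emulator.
nasm -f bin boot.asm -o boot.bin
qemu-system-i386 -drive format=raw,file=boot.bin
```

If the code crashes, it crashes the virtual machine, not your box; you close the QEMU window and try again.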
One of the more interesting findings in the software-development world (but it has been abundantly proven...) is that compilers can produce considerably better code than human beings can.
Today's microprocessor architectures really aren't designed for human-generated code. They're designed for the code that's produced by optimizing compilers. Intel works very closely with the producers of those compilers (specifically including "your friend and mine, gcc") to help them make their compilers produce really good code. You won't do better than they do.
Focus, instead, upon "a good, clean algorithm, well-expressed in your language of choice."
Quote:
Originally Posted by sundialsvcs
One of the more interesting findings in the software-development world (but it has been abundantly proven...) is that compilers can produce considerably better code than human beings can.
Not the case for Turbo C++, LOL.
for example:
Code:
push bp                     ; standard prologue
mov bp,sp
dec sp                      ; reserve locals one byte at a time,
dec sp                      ; instead of a single "sub sp,5"
dec sp
dec sp
dec sp
mov ax,offset DGROUP:@s+2
push ax
call _data2
mov word ptr [bp-2],4
xor dx,dx
@loc_001@:
inc dx                      ; empty counting loop left in the output
cmp dx,8
jl short @loc_001@
Unless you've timed the code, you never know.
For example, I thought that
Code:
for (i = 0; i < LIMIT; i++)
{
    a[i] = f1(i, <other_args_for_a>);
    b[i] = f2(i, <other_args_for_b>); // f1 and f2 are very similar
}
would be better than
Code:
for (i = 0; i < LIMIT; i++)
{
    a[i] = f1(i, <other_args_for_a>);
}
for (i = 0; i < LIMIT; i++)
{
    b[i] = f2(i, <other_args_for_b>); // f1 and f2 are very similar
}
because in the former case the index is incremented only once per iteration for both 'a' and 'b'. But the compiler proved me wrong: the second snippet was faster (by 10-20%; I don't remember exactly, but it was consistent and noticeable).
Get familiar, if you aren't yet, with, say, math-atlas.sf.net: it contains a whole bunch of pre-prepared code snippets, and the tool builds the most optimal implementation for one's particular hardware (CPU, cache, RAM).
The point is that, due to pipelining, the fastest code sometimes needs apparently useless or unnecessary instructions.
Quote:
Originally Posted by sundialsvcs
One of the more interesting findings in the software-development world (but it has been abundantly proven...)
Lots of false statements have been abundantly "proven".
Quote:
is that compilers can produce considerably better code than human beings can.
On any key inner loop, no compiler ever produces better code than I write in asm. I'm out of practice and haven't kept up with the latest timing details in x86_64 CPU performance. So I'm sure a few other humans can do much better than I can. But when performance really matters, I still try my hand at skilled asm coding against the best available compiler and I always win.
In the early 70's people were saying the best Fortran compilers beat the best asm programmers on complicated pipelined super computers. It wasn't close to true. People have been saying (and "proving") similar things about other compilers and architectures ever since. It has never been true. A good asm programmer still consistently beats the best compiler on the code where performance matters most.
Programs are often too big to write even the performance critical parts in asm. You also usually hope your code will outlive the current generation of CPU models, and then recompiling will be a lot easier than rewriting. So writing the inner loops in asm is almost never a good idea. But not because you wouldn't be able to do a better job than any compiler.