Other *NIX
This forum is for the discussion of any UNIX platform that does not have its own forum. Examples would include HP-UX, IRIX, Darwin, Tru64 and OS X.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
So maybe some intermediate beast, more like Windows 95.
Or like pretty much any Linux distribution: a GUI on top of a shell. The website has some pretty poor English on it, so it's hard to tell for sure exactly what is meant by that seemingly contradictory statement.
You can test an operating system I am writing.
I hope this is not considered spam.
I took a look at your little OS, smeezekitty. I'll be honest, it's not very impressive right now, but it's pretty impressive that you were able to write it, and that you were able to get a line to draw on the screen, although I couldn't get that to work. That said, I like it; keep working on it, and I'll definitely test out any new releases. I'm a QA tester at work, so I'm good at testing.
Distribution: M$ Windows / Debian / Ubuntu / DSL / many others
Posts: 2,339
Rep:
Quote:
Originally Posted by prushik
I took a look at your little OS, smeezekitty. I'll be honest, it's not very impressive right now, but it's pretty impressive that you were able to write it, and that you were able to get a line to draw on the screen, although I couldn't get that to work. That said, I like it; keep working on it, and I'll definitely test out any new releases. I'm a QA tester at work, so I'm good at testing.
Once the system calls work correctly it will work much better; for example, child programs will be able to use fopen, fread, fwrite, printf, etc.
Right now I have a segment misalignment.
As for the interface, it's bad, but that's low priority.
Last edited by smeezekitty; 10-22-2009 at 12:11 PM.
OK, cool. Just to clarify, I did get it to boot; I just couldn't get the line to draw.
Anyway, I'm working on writing a program to test the speeds of different aspects of the various OSes discussed in this thread. I'm having trouble finding a good cross-platform way to do it: since these OSes are so different from each other, some of the code will have to be platform dependent, so it will take time to test them all. I will also have to run all of these OSes natively, so I need to allocate a spare HD, and maybe a spare PC, for these tests.
These are the aspects I am planning on testing:
Boot times
Printing text to the screen
Memory operations
File IO
Math operations
Is everyone happy with those tests? Any other suggestions? Are any of these invalid?
That's actually a good question. It will be harder to measure boot times accurately.
Quote:
Originally Posted by smeezekitty
all should have a printf
pointers
Yes, that's true.
Quote:
Originally Posted by smeezekitty
that one [File IO] is not so cross platform
That's true, but that doesn't really matter, because what I am trying to measure is how efficiently the OS can perform file IO.
Quote:
Originally Posted by smeezekitty
basic + - / * is easy; cos() tan() sin() pow() float is not so cross platform
Again, it's not so important that it's cross-platform, because what I'm trying to test is the OS's implementation.
It is, however, important that the code I use to actually measure the time be cross-platform, otherwise it will be harder to compare. That's what I meant before. I did a quick search for info about measuring execution time, and the first example I came across was Windows-specific, which simply will not work.
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197
Rep:
Quote:
Originally Posted by prushik
I'm working on writing a program to test the speeds of different aspects of the various OSes discussed in this thread. I'm having trouble finding a good cross-platform way to do it: since these OSes are so different from each other, some of the code will have to be platform dependent, so it will take time to test them all.
That's a difficult problem that will take some thought. Especially with some of them being so different, and their intentions and purposes being so different. You'll have to come up with some standard tasks that are meaningful to you, and they're not likely to be the sort of standard benchmarks like transactions per second for financial database applications, or the various cpu benchmarks that do things like continuous fast Fourier transforms. Even with those, people get into arguments about a vendor testing their own system with properly optimized code, while they poorly represent their "competition" through lack of knowledge about proper optimization. Just about every vendor claims to have the best and fastest, and they have the tests to show it.
Just for example, I work with Solaris 10 on T5220 servers that have the Sun T2 chip with 8 cores, 8 threads per core, 8 encryption accelerators, 8 floating point processors, and two 10GigE interfaces all on the T2 chip (to the OS it looks like 64 SPARC CPUs). You could install Ubuntu on this machine, but you would have trouble taking advantage of all the capabilities of the T2. If you use Solaris 10 and the Sun compilers (or GCCfss, which is GCC with the Sun compilers as a back end), and link the Sun cryptographic libraries, then you can have SSL or SSH connections with hardware encryption at wire speed. These systems are optimized for Web 2.0 types of environments. If you can break a problem down into parallel operations and write it with multithreaded code, they will perform. However, if you have an intense computational job that cannot be broken down into parallel components, or a lot of code that has simply not been written to take advantage of multithreading, then you might end up claiming that they suck. An everyday desktop PC with a 2.5GHz AMD processor could be faster for that job.
The same sort of thing applies to programming languages. Some are good at one thing, others are good at something else. Fortran still rules for mathematical computations and simulations. LISP is still used for some AI work.
Anyway, just some food for thought.
I was thinking of doing very simple tasks that programs do many times per execution. So that I can write code that is as cross-platform as possible, I think it's important that the code be as similar as possible from one OS to the next, so that they can be compared as accurately as possible. They will all be tested on real hardware, and all on the same hardware; I have a few spare laptops that I could use, or I could use this laptop, or bring out one of my desktops, although I would need to borrow a monitor. The point is to compare how long it takes various OSes to do certain tasks. And to be as thorough as possible: if I conclude that OS I reads from memory faster than OS J, so OS I is better, I'm sure a hundred people will tell me that OS J is for browsing the web and therefore OS J is better (by the way, I made up OS I and OS J).
So anyway, do you think that I should be testing transactions per second or something else of that sort? I think it makes more sense to test lower-level operations, since none of these OSes are designed for financial applications and many do not have database applications. The more code I write, the more room for error on my part, I think.
So, I feel like I'm getting off track and maybe not making sense anymore. Sorry.
Do you have any suggestions for standard tasks that are meaningful to you? I don't want to be selfish here; if there is something specific that you think should be tested, let me know and I'll see what I can do.
Distribution: M$ Windows / Debian / Ubuntu / DSL / many others
Posts: 2,339
Rep:
Quote:
Originally Posted by prushik
I was thinking of doing very simple tasks that programs do many times per execution. So that I can write code that is as cross-platform as possible, I think it's important that the code be as similar as possible from one OS to the next, so that they can be compared as accurately as possible. They will all be tested on real hardware, and all on the same hardware; I have a few spare laptops that I could use, or I could use this laptop, or bring out one of my desktops, although I would need to borrow a monitor. The point is to compare how long it takes various OSes to do certain tasks. And to be as thorough as possible: if I conclude that OS I reads from memory faster than OS J, so OS I is better, I'm sure a hundred people will tell me that OS J is for browsing the web and therefore OS J is better (by the way, I made up OS I and OS J).
So anyway, do you think that I should be testing transactions per second or something else of that sort? I think it makes more sense to test lower-level operations, since none of these OSes are designed for financial applications and many do not have database applications. The more code I write, the more room for error on my part, I think.
So, I feel like I'm getting off track and maybe not making sense anymore. Sorry.
Do you have any suggestions for standard tasks that are meaningful to you? I don't want to be selfish here; if there is something specific that you think should be tested, let me know and I'll see what I can do.
How about program execution time?
A main program keeps starting and killing a small program, and times how long it takes to run x number of times.
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197
Rep:
Quote:
Originally Posted by prushik
do you think that I should be testing transactions per second or something else of that sort? I think it makes more sense to test lower-level operations, since none of these OSes are designed for financial applications and many do not have database applications. The more code I write, the more room for error on my part, I think.
Right.
Quote:
Originally Posted by prushik
Do you have any suggestions for standard tasks that are meaningful to you? I don't want to be selfish here; if there is something specific that you think should be tested, let me know and I'll see what I can do.
You just have to ask yourself what you want to use these OSes for, then test them on that. If you write a small program with just a few lines of code and loop over it as a test, that may not be testing the OS. It may end up getting compiled, loaded into memory, and just testing the CPU, or the efficiency of the compiler, which may be the same (say, if you are using GCC on Linux to compile code for OS I or J).
If you are interested in an OS that can boot up quickly, open some windows quickly, do some graphics, copy some files, access the web, and so on, then test those. Those are going to be more complex programs that exercise the OS and its APIs more than a simple little program. Make it a real task that means something to you and that you can time. You may even decide to test user interface issues (gee, I can do this in 2 steps on this OS, but it takes me 5 steps to do the same thing on that OS). You might separate those into several tests.
There's also FreeDOS, and there's ongoing development of the OpenGem environment for it. I mention it because I have a few DOS apps that are still useful (such as an SVGA video test pattern generator program I wrote back when I was doing a lot of CRT monitor repairs).
Code:
FreeDOS is a free DOS-compatible operating system.
* Easy multiboot with Win95-2003 and NT/XP/ME
* FAT32 file system and large disk support (LBA)
* LFN support via DOSLFN driver
* XDMA & XDVD - UDMA driver for hard discs and DVD players
* LBACACHE - disk cache
* Memory Managers: JEMM386 (XMS, EMS,...)
o possibility of writing 32-bit protected mode drivers (JLMs=Jemm Loadable Module)
* SHSUCDX (MSCDEX replacement) and CD-ROM driver (XCDROM)
* CUTEMOUSE - Mouse driver with scroll wheel support
* FDAPM - APM info/control/suspend/poweroff, ACPI throttle, HLT energy saving...
* MPXPLAY - media player for mp3, ogg, wmv... with built-in AC97 and SB16 drivers; has a user interface
* 7ZIP, INFO-ZIP zip & unzip... - modern archivers are available for DOS
* EDIT / SETEDIT - multi window text editors
* HTMLHELP - help viewer, can read help directly from a zip file
* PG - powerful text viewer (similar to V. D. Buerg's LIST)
* many text mode programs ported from Linux thanks to DJGPP
* FreeCOM - command line, supports file completion
* 4DOS can be installed, which is an enhanced command line.
* GRAPHICS - greyscale hardcopy on ESC/P, HP PCL and PostScript printers
* Arachne - a graphical web browser and e-mail client
* Fdupdate - updates installed FreeDOS from internet server
* bit torrent client
* anti-virus / virus scanner
So it's a replacement for DOS, but it's got a lot of new things never even considered doable with DOS. I might try it. I'm sure my wife will be happy to see some of her old recipes that are buried deep in an old archive on her HDD (they aren't readable without the old DOS program that created them, which doesn't like XP).