LinuxQuestions.org
Old 03-23-2011, 01:16 PM   #1
xtothat
Member
 
Registered: Sep 2008
Location: Middle of Nowhere, England
Distribution: Slackware 14.1, Ubuntu 13.10
Posts: 39

Rep: Reputation: 15
SDL code optimisation / change of library for low end systems.


Hi all,

I have a question about choice of library to be used on low-end systems.

I've so far been using SDL to create a user interface for a media system, and it's been working fantastically on my development machine, a dual-core 2GHz machine with 2 GB of RAM and an Intel graphics card. I get a frame rate of up to 100 fps, and there is no slowdown at all. However, when I run the exact same program on an embedded motherboard, the VIA EPIA EN15000G, which has a 1.5GHz processor and a VIA P4M800 graphics chip (possibly the worst graphics chip ever invented as far as Linux support goes), I get a frame rate of less than 5 fps. This makes the entire program totally unusable, and I've been left in a bit of a pickle. I've looked into alternatives like SFML, but it looks like that won't help at all on an unaccelerated system. I've also tried using the OpenChrome drivers on it, and that made a negligible or nonexistent difference.

Can anybody suggest anything I could look into as far as optimising my code or changing libraries goes? I already use SDL_DisplayFormatAlpha to convert my surfaces to the correct format. I haven't looked into dirty rectangles much yet, as there's hardly any movement on the GUI, just the changing of button states.

Would using OpenGL help at all with the OpenChrome driver, which apparently has 2D acceleration? I'm more than willing to totally overhaul the code if need be.

Any help at all would be appreciated massively, as I'm a bit stumped, and floundering a little!

X-T

Last edited by xtothat; 03-23-2011 at 01:17 PM. Reason: Improving politeness :-)
 
Old 03-24-2011, 10:43 AM   #2
tuxdev
Senior Member
 
Registered: Jul 2005
Distribution: Slackware
Posts: 2,011

Rep: Reputation: 110
DisplayFormatAlpha helps a little, but it's nowhere near the best technique for performance on non-accelerated chips. Look at your art assets to check if you really need alpha blending. If you don't really need alpha blending, you can convert on-the-fly to a colorkey format. There might be some other optimization opportunities, but it's impossible to tell without the actual code.
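If the art turns out not to need per-pixel alpha, the conversion tuxdev describes might look roughly like this in SDL 1.2 (a sketch, not code from the thread; the magenta key colour and the `loadKeyed` helper name are illustrative choices):

```cpp
#include <SDL/SDL.h>

// Sketch: load a bitmap and convert it to the display format with a
// colorkey instead of an alpha channel. Assumes magenta (255, 0, 255)
// marks transparent pixels in the source art -- that convention is an
// assumption here, not something stated in the thread.
SDL_Surface* loadKeyed(const char* path)
{
    SDL_Surface* raw = SDL_LoadBMP(path);
    if (!raw)
        return NULL;

    // Mark the key colour as transparent; SDL_RLEACCEL asks SDL to
    // run-length encode the surface so keyed pixels are skipped in bulk.
    SDL_SetColorKey(raw, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(raw->format, 255, 0, 255));

    // Convert to the screen's pixel format once, at load time, so every
    // later blit avoids a per-frame format conversion.
    SDL_Surface* converted = SDL_DisplayFormat(raw);
    SDL_FreeSurface(raw);
    return converted;
}
```

Note this requires SDL_SetVideoMode to have been called first, since SDL_DisplayFormat converts to the current screen format.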
 
Old 03-24-2011, 11:07 AM   #3
xtothat
Member
 
Registered: Sep 2008
Location: Middle of Nowhere, England
Distribution: Slackware 14.1, Ubuntu 13.10
Posts: 39

Original Poster
Rep: Reputation: 15
Yeah, I've already used DisplayFormatAlpha in my loadImages function. In fact, I seem to get a couple of frames per second more with it than with plain DisplayFormat.

I've actually done a chunk more investigation into it. It looks like my frame rates were wrong on my dual-core machine, and I'm getting nothing like what I thought I was. I've started with nothing but the Code::Blocks sample SDL app. It appears that I'm only getting 20-70 fps on a dual-core machine with accelerated Intel 4-Series graphics, which is ridiculously low!

I've added a couple of lines to it, to test the FPS, and to delay briefly between frames, which appears to lighten the load very slightly, but to no great extent.

Just in case my maths is wrong, or I've got completely the wrong end of the stick: I'm using the formula 1000 / (SDL_GetTicks() - ticks). ticks is reset at the beginning of every frame, and the FPS is displayed right before the end of the main loop. Also, displaying a single image on screen at this rather pathetic speed uses an absurd 40% of my CPU!

It's beginning to look like my code's OK, but SDL itself is horrible. I don't want to believe it, but faced with this evidence, it's a little damning!

I've posted the full code for this at the end of this post. Any help with this would be hugely appreciated. I'm lost as to what could be causing it! Is SDL just that slow, or is there something I'm not understanding?

X-T

Code:
#ifdef __cplusplus
    #include <cstdlib>
#else
    #include <stdlib.h>
#endif
#ifdef __APPLE__
#include <SDL/SDL.h>
#else
#include <SDL.h>
#endif
#include "timer.h"
#include <iostream>
using namespace std;
int ticks = 0;

int main ( int argc, char** argv )
{
    // initialize SDL video
    if ( SDL_Init( SDL_INIT_VIDEO ) < 0 )
    {
        printf( "Unable to init SDL: %s\n", SDL_GetError() );
        return 1;
    }

    // make sure SDL cleans up before exit
    atexit(SDL_Quit);

    // create a new window
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 16,
                                           SDL_HWSURFACE|SDL_DOUBLEBUF);
    if ( !screen )
    {
        printf("Unable to set 640x480 video: %s\n", SDL_GetError());
        return 1;
    }

    // load an image
    SDL_Surface* bmp = SDL_LoadBMP("cb.bmp");
    if (!bmp)
    {
        printf("Unable to load bitmap: %s\n", SDL_GetError());
        return 1;
    }

    // centre the bitmap on screen
    SDL_Rect dstrect;
    dstrect.x = (screen->w - bmp->w) / 2;
    dstrect.y = (screen->h - bmp->h) / 2;

    // program main loop
    bool done = false;
    while (!done)
    {
        ticks = SDL_GetTicks();
        // message processing loop
        SDL_Event event;
        while (SDL_PollEvent(&event))
        {
            // check for messages
            switch (event.type)
            {
                // exit if the window is closed
            case SDL_QUIT:
                done = true;
                break;

                // check for keypresses
            case SDL_KEYDOWN:
                {
                    // exit if ESCAPE is pressed
                    if (event.key.keysym.sym == SDLK_ESCAPE)
                        done = true;
                    break;
                }
            } // end switch
        } // end of message processing

        // DRAWING STARTS HERE

        // clear screen
        SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, 0, 0, 0));

        // draw bitmap
        SDL_BlitSurface(bmp, 0, screen, &dstrect);

        // DRAWING ENDS HERE

        // finally, update the screen :)
        SDL_Flip(screen);

        cout << 1000 / (SDL_GetTicks() - ticks) << endl;

        SDL_Delay(5);

    } // end main loop

    // free loaded bitmap
    SDL_FreeSurface(bmp);

    // all is well ;)
    printf("Exited cleanly\n");
    return 0;
}

Last edited by xtothat; 03-24-2011 at 11:11 AM. Reason: Added code to post
 
Old 03-24-2011, 01:41 PM   #4
SigTerm
Member
 
Registered: Dec 2009
Distribution: Slackware 12.2
Posts: 379

Rep: Reputation: 233
Well...

First, if you do something like this:
Quote:
Originally Posted by xtothat View Post
(SDL_GetTicks() - ticks)
You'll eventually get a divide by zero (I've seen this happen in real life). The timer has low granularity on many platforms (~10 ms), so the difference can easily come out as zero.
Make sure that difference is greater than zero.

Quote:
Originally Posted by xtothat View Post
It appears that I'm only getting 20-70fps on a dual core machine with accelerated Intel 4-Series graphics, which is ridiculously low!
Dual core doesn't really matter here. For example, if vsync is enabled on your system, the fps won't exceed the monitor's refresh rate. Also, transfers between system and video memory may be slow.

Anyway, here are a few test results:
Code:
#include <cstdlib>
#include <SDL/SDL.h>
#include <iostream>
using namespace std;

int main ( int argc, char** argv ){
    if(SDL_Init( SDL_INIT_VIDEO ) < 0){
        printf( "Unable to init SDL: %s\n", SDL_GetError() );
        return 1;
    }

    // make sure SDL cleans up before exit
    atexit(SDL_Quit);

    // create a new window
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 16,
        SDL_HWSURFACE|SDL_DOUBLEBUF|SDL_FULLSCREEN);
    if(!screen){
        printf("Unable to set 640x480 video: %s\n", SDL_GetError());
        return 1;
    }

    // load an image
    SDL_Surface* bmp = SDL_LoadBMP("cb.bmp");
    if (!bmp){
        printf("Unable to load bitmap: %s\n", SDL_GetError());
        return 1;
    }
    SDL_Surface* hwBmp = SDL_CreateRGBSurface(SDL_HWSURFACE, bmp->w, bmp->h, 32, 0, 0, 0, 0);
    SDL_BlitSurface(bmp, 0, hwBmp, 0);

    // centre the bitmap on screen
    SDL_Rect dstrect;
    dstrect.x = (screen->w - bmp->w) / 2;
    dstrect.y = (screen->h - bmp->h) / 2;

    // program main loop
    bool done = false;
    Uint32 lastTick = SDL_GetTicks(), accumTicks = 0;
    int frames = 0;
    while (!done){
        SDL_Event event;
        while (SDL_PollEvent(&event)){
            switch (event.type){
                case SDL_QUIT:
                    done = true;
                    break;
                case SDL_KEYDOWN:{
                    if (event.key.keysym.sym == SDLK_ESCAPE)
                        done = true;
                    break;
                }
            }
        }

        SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, 0, 0, 0));

        //SDL_BlitSurface(bmp, 0, screen, &dstrect);
        SDL_BlitSurface(hwBmp, 0, screen, &dstrect);

        SDL_Flip(screen);

        // average the frame rate over many frames instead of dividing
        // by a single (possibly zero) frame time
        Uint32 curTick = SDL_GetTicks();
        Uint32 tickDiff = curTick - lastTick;
        lastTick = curTick;
        frames++;
        accumTicks += tickDiff;
        if (frames && accumTicks && ((accumTicks > 250) || (frames > 10))){
            float fps = 1000.0f*frames/(float)accumTicks;
            cout << fps << endl;
            accumTicks = 0;
            frames = 0;
        }

        //SDL_Delay(5);
    }

    SDL_FreeSurface(hwBmp);
    SDL_FreeSurface(bmp);

    printf("Exited cleanly\n");
    return 0;
}
On windows dualcore machine with GeForce GPU:
  • If there is nothing to do, and only SDL_Flip is being called (i.e. SDL_BlitSurface + SDL_FillRect are commented out), you get 2200 "frames" per second. This is a maximum theoretically possible framerate on this machine.
  • If SDL_Flip + SDL_FillRect are called (SDL_BlitSurface is commented out), you get 913 fps.
  • If SDL_Flip + SDL_FillRect + SDL_BlitSurface are called, you get roughly 350 fps in 16bit and 32bit mode, and 43 fps in 24bit mode (apparently it enables vsync accidentally). There is no performance difference between software/hardware surfaces.
  • According to profiler results, SDL_BlitSurface is the slowest call.
Frankly, for such a resolution 350 fps is a pathetic result (I would expect around 700 frames per second), so I suspect there is some kind of problem with the SDL_BlitSurface implementation - maybe it doesn't use hardware acceleration properly. Hardware surface blitting should work faster than that (if I remember correctly).

In your situation I'd advise trying OpenGL if you want hardware-accelerated rendering, and using SDL without OpenGL only if you're writing a software renderer (a raytracer, for example) and need some kind of frontend to blit software surfaces onto the screen. At least OpenGL in SDL has vsync control (SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 0)). However, I cannot guarantee that this will help you.
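For reference, switching SDL 1.2 over to an OpenGL context with vsync disabled takes only a few lines; this is a sketch under the assumption that the driver exposes a usable GL context at all, which is far from certain on the VIA chip:

```cpp
#include <SDL/SDL.h>

// Sketch: request an OpenGL-backed window through SDL 1.2 with vsync
// disabled. Drawing then goes through GL calls rather than SDL blits.
bool initGlWindow()
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return false;

    // GL attributes must be set before SDL_SetVideoMode
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 0);   // 0 = vsync off

    // SDL_OPENGL replaces SDL_HWSURFACE|SDL_DOUBLEBUF; flipping is then
    // done with SDL_GL_SwapBuffers() instead of SDL_Flip().
    return SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) != NULL;
}
```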

Last edited by SigTerm; 03-24-2011 at 01:43 PM.
 
1 member found this post helpful.
Old 03-24-2011, 02:15 PM   #5
xtothat
Member
 
Registered: Sep 2008
Location: Middle of Nowhere, England
Distribution: Slackware 14.1, Ubuntu 13.10
Posts: 39

Original Poster
Rep: Reputation: 15
Hi!

Many thanks for your excellent answer!

Yeah, the tick counter was just a two-second implementation for the sake of testing, so I didn't see it as massively important to watch out for that. Now that your results have come in looking a little shady, I reckon I'll probably give up on SDL and try out a few more options. The machine the program is destined for has a 1.5GHz processor and a godawful VIA GPU, so I reckon my developments in SDL are an exercise in futility. There's no real HW accel for anything from VIA, so I don't think I'll try OpenGL, as much as I'd like to start playing with it. I'll probably try GTK and Qt to see how they do. They're just a little more bloated than I need. Still, beggars can't be choosers!

Yet again, thanks loads for your answer!

X-T
 
Old 03-24-2011, 02:53 PM   #6
SigTerm
Member
 
Registered: Dec 2009
Distribution: Slackware 12.2
Posts: 379

Rep: Reputation: 233
Quote:
Originally Posted by xtothat View Post
I reckon I'll probably give up on SDL, and maybe try out a few more options.
A few more bits of info.
From my experience, SDL has its uses. It's fine for providing cross-platform OpenGL initialization, cross-platform input (joysticks/keyboards/mice), full-screen mode support, and basic cross-platform threading. I.e. it is mostly useful for making or porting 3D games (the UT2004 and Doom 3 Linux ports used SDL, for example). However, its 2D capabilities look weak to me (compared to what I remember of IDirectDraw7 on the Windows platform) and don't seem to be the library's best feature. In other words, SDL has its strong points, but 2D surface blitting doesn't look like one of them - to me, SDL's 2D facilities are mostly useful when you want to make some kind of full-software renderer (raytracer, voxel engine), or need to port something from the DOS era (direct screen access).

Quote:
Originally Posted by xtothat View Post
There's no real HW accel for anything from VIA, so I don't think I'll try OpenGL, as much as I'd like to start playing with it. I'll probably try GTK and Qt, to see how they do.
I think most chips nowadays should support HW accel at least for basic 2D rasterization (DirectX 7 level - that was present on the Riva TNT2 PRO), but I cannot guarantee that this feature is present on your hardware or supported by the OS.

Of GTK and Qt, Qt might be a very good choice - if you're using C++.
 
1 member found this post helpful.
Old 03-24-2011, 03:54 PM   #7
tuxdev
Senior Member
 
Registered: Jul 2005
Distribution: Slackware
Posts: 2,011

Rep: Reputation: 110
It's well known that on most platforms SDL_HWSURFACE means nothing and you'll get a software surface anyway. That's not an issue, because even on platforms that do support SDL_HWSURFACE, software surfaces are actually superior. Ensuring that the data is in the right format is *the* biggest thing affecting 2D SDL performance. Colorkeying with RLE acceleration is really effective, because the fastest pixels to draw are the ones you don't.
 
1 member found this post helpful.
Old 03-25-2011, 12:50 PM   #8
xtothat
Member
 
Registered: Sep 2008
Location: Middle of Nowhere, England
Distribution: Slackware 14.1, Ubuntu 13.10
Posts: 39

Original Poster
Rep: Reputation: 15
Thanks guys. I am working in C++, so I think I'm going to go with Qt. I've got a fair bit of experience with PyQt, so I'll probably have the most luck with it.
 
  

