Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
I have a streaming and recording script for an embedded ARMv7 SoC, built with the Yocto Project. The issue is intense CPU usage, around 97-100% on 1 core out of 4. Do you have any idea how I could distribute the streaming task over the 4 cores? I've been testing with the -threads 4 ffmpeg option, without much success. I double-checked the Yocto recipe file: ffmpeg was compiled with the multithreading feature enabled and for the ARMv7 architecture.
Here is the script:
Code:
#!/bin/bash

CFG="$(uci get application.audio.ffserver_config)"
PORT="$(uci get application.audio.streaming_port)"
REC_PATH="$(uci get application.audio.recording_path)"
REC_SAMPLE="$(uci get application.audio.recording_sample)"
DAY="$(date '+%Y-%m-%d')"
MAC="$(sed 's/://g' /sys/class/net/eth0/address)"

if [ ! -e "$CFG" ]; then
cat > "$CFG" <<EOF
HTTPPort $PORT
HTTPBindAddress 0.0.0.0
MaxHTTPConnections $(uci get application.audio.max_http_conns)
MaxClients $(uci get application.audio.max_clients)
MaxBandwidth $(uci get application.audio.max_bandwidth)
CustomLog $(uci get application.audio.ffserver_log)
<Feed audio.ffm>
File /tmp/audio.ffm
FileMaxSize 32M
</Feed>
<Stream audio>
Feed audio.ffm
Format wav
AudioCodec pcm_s16le
AudioBitRate 256
AudioChannels 2
AudioSampleRate $(uci get application.audio.streaming_sample)
NoVideo
StartSendOnKey
</Stream>
<Stream stat.html>
Format status
</Stream>
EOF
fi

ffserver -f "$CFG" &

sleep 1
# mkdir -p creates parent directories as needed, so one call covers both levels
mkdir -p "$REC_PATH/$DAY" || exit 1
ffmpeg -f alsa -i hw:0,0 -acodec pcm_s16le \
    -f segment -strftime 1 \
    -segment_time "$(uci get application.audio.duration)" \
    -segment_format wav -ar "$REC_SAMPLE" \
    "$REC_PATH/$DAY/%Y-%m-%d__%H_%M_${REC_SAMPLE}_${MAC}.wav" \
    -shortest "http://localhost:$PORT/audio.ffm"
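One way to actually spread the load, as a sketch only (untested on the target SoC; the core lists and the dsnoop note are assumptions, not from the original script), is to run the streaming and the recording as two separate ffmpeg processes and pin them to different cores with taskset from util-linux:

```shell
#!/bin/sh
# Sketch: pin two independent ffmpeg processes to different cores.
# STREAM_CORES/RECORD_CORES are illustrative; the real command lines
# would reuse the uci values from the script above.
STREAM_CORES=0,1
RECORD_CORES=2,3

# taskset -c LIST CMD restricts CMD to the listed cores, e.g.:
#   taskset -c "$STREAM_CORES" ffmpeg -f alsa -i hw:0,0 ... http://localhost:PORT/audio.ffm &
#   taskset -c "$RECORD_CORES" ffmpeg -f alsa -i hw:0,0 ... -f segment ... &
# Note: two readers on hw:0,0 would need an ALSA dsnoop (or loopback) device.
taskset -c 0 echo "pinned run works"
```

This only helps if the work is genuinely two separable jobs; pinning a single hot thread to a different core just moves it, it does not divide it.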
If it is one thread that has the high CPU usage, then that thread will be bound to one core at a time.
The scheduler could share the load, but I imagine it will just switch the thread between cores rather than divide the work.
I wouldn't expect WAV encoding to be CPU intensive, but I have no idea what that SoC is.
You could try adding an audio bitrate/sample rate to the ffmpeg input options.
I'm not certain whether application.audio.ffserver_config influences ffmpeg itself or just provides metadata to the client.
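To check the one-hot-thread theory, per-thread CPU usage can be listed with ps -L (or interactively with top -H). A minimal sketch, using the current shell's PID as a stand-in for the real ffmpeg PID:

```shell
#!/bin/sh
# List one row per thread: PID, thread id (LWP), %CPU and command name.
# $$ (this shell) stands in here for the ffmpeg PID, e.g. from pidof ffmpeg.
PID=$$
ps -L -o pid,lwp,pcpu,comm -p "$PID"
```

If the output shows several ffmpeg threads but only one with high %CPU, the bottleneck is a serial stage that extra threads cannot help.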
That's not a valid statement.
Please do some troubleshooting; see if you can tickle ffmpeg into telling you why it doesn't do as you told it.
Yes, you're right. I meant that I was debugging by inserting -threads 4 at several places in the ffmpeg line: just after the -i option, before it, and so on.
It seems it does not have any effect on how ffmpeg works internally.
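For what it's worth, ffmpeg applies per-file options to the file that follows them on the command line, which is why the placement of -threads matters. A sketch of the placement (the command is only echoed here, not executed; hw:0,0 and the codec come from the script above):

```shell
#!/bin/sh
# ffmpeg option order: ffmpeg [global] [input opts] -i INPUT [output opts] OUTPUT
# "-threads 4" before the output sets encoder threads for that output;
# before -i it sets decoder threads for that input. pcm_s16le does almost
# no work per sample, so extra encoder threads cannot reduce CPU for it anyway.
THREADS=4
CMD="ffmpeg -f alsa -i hw:0,0 -threads $THREADS -acodec pcm_s16le -f null -"
echo "$CMD"
```

Since the PCM encode is nearly free, the hot loop is more likely elsewhere (resampling, the ffserver feed, or I/O) than in the encoder, which would explain why -threads changes nothing.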