LinuxQuestions.org Member Success Stories
Just spent four hours configuring your favorite program? Just figured out a Linux problem that has been stumping you for months?
Post your Linux Success Stories here.
Running top shows that plasmashell was just eating CPU. The culprit, it seems, is a long-standing KDE bug that rears its ugly head every now and then.
...from a terminal, and it *SHOULD* drop back to normal. The problem, it seems, is with the notification icon(s) in your system tray: get one, or have one with an animation (weather? network? new mail?), and you get the spike.
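The post is cut off before the exact terminal command, but a common way to bounce plasmashell (an assumption here, not the poster's verbatim recipe) looks like this:

```shell
# Hypothetical restart sequence -- the original post truncates before the
# actual command. Quit plasmashell cleanly if possible, then relaunch it.
if pgrep -x plasmashell >/dev/null; then
    kquitapp5 plasmashell 2>/dev/null || killall plasmashell
    nohup plasmashell >/dev/null 2>&1 &
    status="restarted"
else
    status="not running"
fi
echo "plasmashell $status"
```

On a Plasma 5 desktop the first branch quits and relaunches the shell; run from anything else (or a machine without KDE) it simply reports that plasmashell isn't running.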
Editing the /usr/share/plasma/plasmoids/org.kde.plasma.notifications/contents/ui/NotificationIcon.qml file *seems* to do the trick (your mileage may vary). Look for this section:
Code:
PlasmaComponents.BusyIndicator {
    anchors.fill: parent
    visible: jobs ? jobs.count > 0 : false
    running: active
}
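The excerpt ends before showing what to change in that section. One commonly reported tweak (an assumption on my part, not confirmed by the truncated post) is to stop the spinner from animating while it is hidden, since an always-running animation is exactly the kind of thing that pegs a core:

```qml
// Hypothetical edit: tie the animation to visibility so a hidden
// BusyIndicator cannot keep spinning (and repainting) in the background.
PlasmaComponents.BusyIndicator {
    anchors.fill: parent
    visible: jobs ? jobs.count > 0 : false
    running: visible && active
}
```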
I fixed it by randomly killing one of the plasmashell threads within the process. I didn't kill Plasma itself; the desktop still works, CPU usage dropped, and all the remaining high-CPU threads died at the same time as well (there were about 10-15 visible in htop).
Could you link the bug report for this issue, if one is in progress?
Anyway, thanks for your guideline, which I will try next time, as for now my random thread kill worked fine :-)
# Uncomment this to enable the lirc-rpi module
# Leave it commented if there is no infrared
#dtoverlay=lirc-rpi
#NO MEMORY-SPLIT!
#gpu_mem=64
# Additional overlays and parameters are documented in /boot/overlays/README
dtoverlay=i2c-rtc,ds1307
# Enable audio (loads snd_bcm2835)
dtparam=audio=on
#NO OVERCLOCKING
# DRIVER FOR GL2
dtoverlay=vc4-fkms-v3d
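Because a single leading `#` is all that separates a disabled overlay from an active one, it can help to grep config.txt for the lines the firmware will actually act on. A quick sketch (the sample file below is hypothetical; on a Pi the real file lives at /boot/config.txt):

```shell
# Build a sample config.txt (hypothetical contents), then list only the
# uncommented dtoverlay/dtparam lines -- the ones the firmware honours.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
#dtoverlay=lirc-rpi
#gpu_mem=64
dtoverlay=i2c-rtc,ds1307
dtparam=audio=on
dtoverlay=vc4-fkms-v3d
EOF
active=$(grep -E '^(dtoverlay|dtparam)=' "$cfg")
echo "$active"
rm -f "$cfg"
```

For the sample above this prints the three active lines and skips the two commented-out ones.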
/usr/bin/zram.sh
Code:
#!/bin/bash
# Create one zram swap device per CPU core and enable them as swap.
cores=$(nproc --all)
modprobe zram num_devices=$cores
swapoff -a
totalmem=$(free | grep -e "^Mem:" | awk '{print $2}')
# mem=$(( 1024 * ($totalmem / $cores) )) is not working with my shell
# install bc! without bc the calculation fails
mem=$(echo "scale=0;2048*($totalmem/$cores)" | bc)
core=0
while [ $core -lt $cores ]; do
    echo $mem > /sys/block/zram$core/disksize
    mkswap /dev/zram$core
    swapon -p 5 /dev/zram$core
    core=$((core+1))
    # let core=core+1 is not working with my shell
done
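For anyone wondering what that bc line actually produces, the arithmetic can be checked by hand with plain shell math (the memory figure and core count below are made up):

```shell
# Hypothetical input: `free` reporting 1 GiB of RAM (1048576 KiB), 4 cores.
totalmem=1048576
cores=4
# Same formula as the script: 2048 * (totalmem / cores)
mem=$(( 2048 * (totalmem / cores) ))
echo "$mem"
```

That works out to 536870912 bytes (512 MiB) per device, so across four devices the script provisions roughly twice the machine's RAM as compressed swap, which is plausible given zram's compression ratios.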