Old 04-25-2012, 03:56 AM   #1
Aquarius_Girl
Senior Member
 
Registered: Dec 2008
Posts: 4,731
Blog Entries: 29

Rep: Reputation: 940
What can be the real disadvantages of disabling SMI/SMM?


From here: http://en.wikipedia.org/wiki/System_...ent_Mode#Usage
Quote:
Some uses of SMM are:
  • Handle system events like memory or chipset errors.
  • Manage system safety functions, such as shutdown on high CPU temperature and turning the fans on and off.
  • Security functions, such as flash device lock down require SMM support on some chipsets.
  • Deeper sleep power management support on Intel systems.
  • Control power management operations, such as managing the voltage regulator modules.
  • Emulate motherboard hardware that is unimplemented or buggy.
  • Emulate a PS/2 mouse or keyboard by converting the messages from USB versions of those peripherals to the messages that would have been generated had PS/2 versions of such hardware been connected.
  • Centralize system configuration, such as on Toshiba and IBM notebook computers.
  • Breaking into SMM to run high-privileged rootkits as shown at Black Hat 2008.[1]
  • Emulate or forward calls to a Trusted Platform Module (TPM).[2]
Last time I configured the kernel, I found an option there
to disable SMI.

So, if we disable SMI, will that disable SMM automatically?
Neither Wikipedia nor Google seems to say much about SMI.
  • Secondly, what kind of real harm can disabling SMI/SMM
    do to an x86/64 processor?
    Can it cause a real fire or something? Please provide
    links so that I can study more, or some keywords which
    I can search in Google to get more on "disadvantages of
    disabling SMI/SMM".
  • Besides this, how should I check whether my system is
    robust enough to manage without SMI/SMM?
    What kind of configuration is needed for that?
This is all in relation to Xenomai (the real-time things and all).

Last edited by Aquarius_Girl; 04-25-2012 at 04:07 AM.
 
Old 04-25-2012, 04:59 AM   #2
Noway2
Senior Member
 
Registered: Jul 2007
Distribution: Gentoo
Posts: 2,125

Rep: Reputation: 781
My response may be biased because of my background in embedded systems, but it looks to me like these functions are mostly meant to provide functionality outside the scope of the operating system, specifically for fine-tuning of the hardware (adjusting voltage regulators, handling high-temperature conditions, power management and sleep mode, writing to flash modules). Given the wide range and variety of hardware that the CPU is designed to work with, my guess is that this is one means of providing compatibility and compensating for variances in equipment.

Based on the fact that software calls are only one method to enter this mode, the other being a hardware interrupt, I doubt you can totally disable this mode. As far as why you would want to, I see little benefit to it, as at best you may gain a few clock cycles of execution but little else. I doubt that by disabling it in software you would cause any real damage to a system, as it would be far more likely for the hardware to override with respect to anything (design-wise, not malfunctioning) that could cause damage.

I suspect most systems would still work if you disable this mode, but again, I anticipate little benefit from doing so.
 
Old 04-25-2012, 05:35 AM   #3
Aquarius_Girl
Senior Member
 
Registered: Dec 2008
Posts: 4,731

Original Poster
Blog Entries: 29

Rep: Reputation: 940
Thanks for replying.

Quote:
Originally Posted by Noway2
I doubt you can totally disable this mode.
So, what exactly happens when the kernel configuration option
for the same is set?

Quote:
Originally Posted by Noway2
As far as why you would want to, I see little benefit to it, as at best you may gain a few clock cycles of execution but little else.
Xenomai needs those clock cycles to improve its latencies.

Quote:
Originally Posted by Noway2
I doubt that by disabling it in software you would cause any real damage to a system, as it would be far more likely for the hardware to override with respect to anything (design-wise, not malfunctioning) that could cause damage.

I suspect most systems would still work if you disable this mode, but again, I anticipate little benefit from doing so.
I am not too sure what to make of this. I asked the same
question on osdev, and they said that disabling SMM/SMI is a very bad
idea and that it can cause an actual fire. Does that mean that disabling it
through the kernel has nothing to do with fire at all? So what does
disabling that kernel option do?

http://forum.osdev.org/viewtopic.php...=25205&start=0
 
Old 04-25-2012, 09:23 AM   #4
Noway2
Senior Member
 
Registered: Jul 2007
Distribution: Gentoo
Posts: 2,125

Rep: Reputation: 781
Quote:
Originally Posted by Anisha Kaul
Thanks for replying.
You're quite welcome. I'm happy to help.

Quote:
So, what exactly happens when the kernel configuration option
for the same is set?
I don't actually know, as I am not a kernel developer. As a slightly educated guess, I would say that it enables the functionality of issuing software interrupts to transfer execution to SMM, as well as reading the signals provided by it. Since it has been in processors since the 386SL (circa 1990), I am sure there is a standardized API.

Quote:
Xenomai needs those clock cycles for improving their latencies.
Again, coming from a background in embedded (real-time) systems, including the use of real-time operating systems such as Micro C-OS and TI's DSP/BIOS, I would emphatically argue that if you are that pressed for time resources, you have absolutely chosen the wrong hardware for the job. I also believe that, regardless of how much a customer or boss may try to argue the fact, as a developer you know this to be true.

According to Jack Crenshaw, who in my opinion is truly one of the greats when it comes to real time software and one who helped put men on the moon during NASA's Apollo program:
Quote:
The style one uses to write code depends a lot on the weight one assigns to the different "-ilities." In the past I've given you my list of attributes that I value. They are, pretty much in order:
  1. Correctness (always #1)
  2. Ease of use
  3. Readability
  4. Maintainability
  5. Efficiency
You'll note that I almost always place efficiency way down on the list. Yes, it's important, but I try to never obfuscate or complicate the code just to gain efficiency.
I would like to point you to a couple of articles of his:
This one has the discussion of software priorities I quoted above, and this one talks about real-time systems, including real-time Linux. It is dated (about 10 years old), but what was true then is still true now.

Quote:
Originally Posted by Anisha Kaul
I am not too sure what to make of this. I asked the same
question on osdev, and they said that disabling SMM/SMI is a very bad
idea and that it can cause an actual fire. Does that mean that disabling it
through the kernel has nothing to do with fire at all? So what does
disabling that kernel option do?
1) Again, if you are so pressed for resources that you need to rob the CPU in this manner, you have other problems. A system that is this tight for computing time is NOT going to function in any sort of stable manner.
2) System hardware should never depend upon software to prevent damage, especially if it involves risk of fire or injury to a person. If it does, then it is a failed design to begin with, and I would be amazed if it were ever to pass listing by an agency such as UL or Factory Mutual. A bad design, however, may damage itself, and a bad design may be such that it requires software to keep this from happening. Based upon my experience (I am an Electrical Engineer with hardware design experience), the type of scenario you would be looking at is damage to the hardware through overheating or overvoltage. Such damage would likely cause heat sufficient to either cause solder reflow (and cause the part to either break contact or fall off the PCB), destroy the junctions in the IC, or both. A fault condition may blow a fuse, or even melt a wire, or even burn up part of the PCB (these last two being neither desirable nor proper, but it does happen), but it should not start a fire.
 
1 member found this post helpful.
Old 04-26-2012, 04:58 AM   #5
Aquarius_Girl
Senior Member
 
Registered: Dec 2008
Posts: 4,731

Original Poster
Blog Entries: 29

Rep: Reputation: 940
Thanks again for the detailed post.

Quote:
Originally Posted by Noway2
As a slightly educated guess, I would say that it enables the functionality of issuing software interrupts to transfer execution to SMM, as well as reading the signals provided by it.
Wikipedia says:
Quote:
SMM is entered via the SMI (system management interrupt), which is caused by:
1. Motherboard hardware or chipset signaling via a designated pin SMI# of the processor chip.[3] This signal can be an independent event.
2. Software SMI triggered by the system software via an I/O access to a location considered special by the motherboard logic (port 0B2h is common).
3. An IO write to a location which the firmware has requested that the processor chip act on.
{So, this means that SMI is an interrupt which gets triggered
when the hardware (for example) heats up, or when low-level
software wants to perform an I/O access to sensitive
hardware like the motherboard, etc.}?
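If I understand entry method 2 correctly, something like the following would raise a software SMI from Linux (purely illustrative; the command byte and what the firmware does with it are entirely BIOS-specific, so I wouldn't run this on production hardware). Run as root:
Code:
#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    /* Ask the kernel for access to I/O port 0xB2 (requires root). */
    if (ioperm(0xb2, 1, 1)) {
        perror("ioperm");
        return 1;
    }
    /* The written value selects a firmware handler; 0x00 is an
       arbitrary command byte.  The CPU enters SMM, the BIOS handler
       runs, and execution resumes here transparently -- the OS never
       sees the interrupt. */
    outb(0x00, 0xb2);
    puts("wrote to port 0xB2");
    return 0;
}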

So, setting that option [CONFIG_XENO_HW_SMI_WORKAROUND]
to switch SMI off will, of course, stop the software interrupts. Now the
question is: will this stop the concerned hardware interrupts too?

From here: http://www.xenomai.org/index.php/FAQ...our_x86_kernel
Quote:
CONFIG_XENO_HW_SMI_WORKAROUND
This tries to disable System Management Interrupts on Intel chipsets. These interrupts run at higher priority than Xenomai core code or even the I-pipe layer. So they can cause high latencies to the real-time jobs (up to milliseconds). ATTENTION: On some systems, SMI may be involved in thermal throttling of the CPU. Thus, switching it off can cause hardware damage in overheat situations.
They seem to be saying that it can disable the hardware
interrupts too!
Does the ATTENTION line apply to x86/64 processors in general?

Quote:
Originally Posted by Noway2
Again, coming from a background in embedded (real-time) systems, including the use of real-time operating systems such as Micro C-OS and TI's DSP/BIOS, I would emphatically argue that if you are that pressed for time resources, you have absolutely chosen the wrong hardware for the job. I also believe that, regardless of how much a customer or boss may try to argue the fact, as a developer you know this to be true.
Actually, currently I am just trying to understand exactly what causes
what, and why we can't do this and that. Writing a report needs
references and reasons. I'll look into the real-time OSes you've listed.
Thanks.

Quote:
Originally Posted by Noway2
According to Jack Crenshaw, who in my opinion is truly one of the greats when it comes to real time software and one who helped put men on the moon during NASA's Apollo program:
I would like to point you to a couple of articles of his:
This one has the discussion of software priorities I quoted above, and this one talks about real-time systems, including real-time Linux. It is dated (about 10 years old), but what was true then is still true now.
Thanks very much. I didn't know about all those links or the person
in question. I will study the material.

Quote:
Originally Posted by Noway2
1) Again, if you are so pressed for resources that you need to rob the CPU in this manner, you have other problems. A system that is this tight for computing time is NOT going to function in any sort of stable manner.
2) System hardware should never depend upon software to prevent damage, especially if it involves risk of fire or injury to a person. If it does, then it is a failed design to begin with, and I would be amazed if it were ever to pass listing by an agency such as UL or Factory Mutual. A bad design, however, may damage itself, and a bad design may be such that it requires software to keep this from happening. Based upon my experience (I am an Electrical Engineer with hardware design experience), the type of scenario you would be looking at is damage to the hardware through overheating or overvoltage. Such damage would likely cause heat sufficient to either cause solder reflow (and cause the part to either break contact or fall off the PCB), destroy the junctions in the IC, or both. A fault condition may blow a fuse, or even melt a wire, or even burn up part of the PCB (these last two being neither desirable nor proper, but it does happen), but it should not start a fire.
Again very helpful. Actually, we have software written in C++ which
controls some unmanned systems with obstacle-avoidance algorithms,
shortest-path-finding algorithms, and all. So, currently I don't know what kind
of real time my boss needs. I'll give him a study of what works in which
situation and what doesn't, and will let him decide whatever he wants.

Last edited by Aquarius_Girl; 04-26-2012 at 05:01 AM.
 
Old 04-26-2012, 09:15 AM   #6
Noway2
Senior Member
 
Registered: Jul 2007
Distribution: Gentoo
Posts: 2,125

Rep: Reputation: 781
Whether or not disabling the service in the kernel will disable the hardware interrupts will depend upon the hardware design. From when I last studied x86 architecture, there were four PCI interrupt lines (A, B, C, D), with an array of interrupt vectors multiplexed through them. I can't recall if any of these were NMI (non-maskable), meaning that they could not be turned off. Most processors contain at least one NMI, and it is frequently used for things like power fail. I would think that most PC designs would put voltage and temperature regulation (via fan speed control) in the hardware, but lower-cost devices may rely on software for this function.
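For what it's worth, descriptions of the Xenomai workaround suggest it targets the SMI_EN register in the chipset's power-management I/O block (PMBASE + 30h on Intel ICH parts), whose bit 0 (GBL_SMI_EN) gates all SMI sources. Here is a read-only sketch, assuming a PMBASE of 0x400; on real hardware you would read PMBASE from the LPC bridge's PCI configuration space rather than hard-coding it:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

/* Assumed layout from the Intel ICH datasheets: the ACPI/PM I/O block
   lives at PMBASE (0x400 is a common value, but read it from the LPC
   bridge's PCI config space on a real board), and the SMI_EN register
   sits at PMBASE + 0x30.  Bit 0 is GBL_SMI_EN. */
#define PMBASE      0x0400
#define SMI_EN      (PMBASE + 0x30)
#define GBL_SMI_EN  0x01

int main(void)
{
    /* Ports above 0x3FF need full I/O privilege, hence iopl(3). */
    if (iopl(3)) {
        perror("iopl");
        return EXIT_FAILURE;
    }
    unsigned int val = inl(SMI_EN);
    printf("SMI_EN = 0x%08x (GBL_SMI_EN is %s)\n",
           val, (val & GBL_SMI_EN) ? "set" : "clear");
    /* Clearing GBL_SMI_EN would disable *all* SMIs -- including any the
       firmware uses for thermal management, which is exactly the risk
       the Xenomai FAQ warns about -- so this sketch only reads it. */
    return EXIT_SUCCESS;
}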

In terms of real-time processing, virtually any "real time" system is going to have a particular component that must execute "in real time", meaning on a periodic basis, and that must complete before the next event. This becomes the real constraint on the design and operation of your system. As an example, I used to work in the power quality industry on 3-phase static transfer switches. The system was constrained by the rate at which the ADC would run and provide the samples of the voltages. An ADC conversion was started every 100 µs, and the primary control loop had to execute in less time than this. On each interrupt, it was necessary to read the ADC, calculate the phase difference between the sampled voltages and the references, filter this result, update the numerically controlled oscillator (for the references), and compute the power quality vectors corresponding to magnitude and frequency. All of the other system functions, such as handling the Modbus, the user interface, the state of the breakers, etc., were handled in the extra time as available.
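To make that concrete, here is a skeletal C version of such a loop. Every identifier is invented for illustration (this is not the actual product code), and the hardware calls are stubbed so it compiles stand-alone:
Code:
#include <stdio.h>

static int adc_read(void)      { return 0; }  /* stub: read conversion  */
static int phase_error(int s)  { return s; }  /* stub: phase vs. refs   */
static int loop_filter(int e)  { return e; }  /* stub: filter result    */
static void nco_update(int u)  { (void)u;  }  /* stub: steer oscillator */
static void pq_update(int s)   { (void)s;  }  /* stub: magnitude/freq   */

static volatile int busy, overrun;

/* Would be the ADC end-of-conversion ISR, fired every 100 us.  It must
   finish before the next conversion completes; 'overrun' records a
   missed deadline, which is a hard real-time failure. */
void control_isr(void)
{
    if (busy) { overrun = 1; return; }
    busy = 1;
    int s = adc_read();                       /* 1. read the ADC        */
    nco_update(loop_filter(phase_error(s)));  /* 2-4. phase, filter, NCO */
    pq_update(s);                             /* 5. power quality vectors */
    busy = 0;
}

int main(void)
{
    for (int tick = 0; tick < 10; tick++)     /* simulate 10 periods */
        control_isr();
    printf("overrun = %d\n", overrun);
    return 0;
}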

In most applications, you will have four main types of processes, in decreasing order of priority: hardware interrupts, software interrupts, periodic functions, and tasks, with sub-prioritization within each group. Hardware interrupts should be used to respond to physical events, both synchronous (e.g. timers) and asynchronous (power fail), and should be very tightly focused on the function. Software interrupts should be used to schedule high-priority software tasks that run start to finish on a demand basis. Periodic tasks can be scheduled on a regular basis and used to trigger things like ADC conversions that have to occur on a regular schedule. Regular tasks then should operate as a continual (forever) loop, as in the sketch below.
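A minimal bare-metal-style sketch of that layering, with invented names and the timer interrupt simulated so the program terminates:
Code:
#include <stdbool.h>
#include <stdio.h>

static volatile bool tick_10ms, tick_100ms;

/* Highest priority: the (simulated) hardware timer interrupt.  It only
   raises flags; all real work is deferred to lower-priority levels. */
static void timer_isr(void)
{
    static unsigned n;
    tick_10ms = true;
    if (++n % 10 == 0)
        tick_100ms = true;
}

static void sample_sensors(void) { puts("10 ms: sample"); }   /* periodic */
static void update_display(void) { puts("100 ms: display"); } /* low rate */

int main(void)
{
    /* Lowest priority: the background (forever) loop.  Here we simulate
       30 timer ticks instead of running forever. */
    for (int t = 0; t < 30; t++) {
        timer_isr();
        if (tick_10ms)  { tick_10ms  = false; sample_sensors(); }
        if (tick_100ms) { tick_100ms = false; update_display(); }
        /* any remaining background work runs in the time left over */
    }
    return 0;
}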

In real-time systems, the RTOS provides process scheduling as well as inter-process communication functions, such as semaphores, message queues, mailboxes, etc. In this regard it is very much like Linux. I am not a kernel developer and can't comment on Linux so much, but in an RTOS the scheduling is often done preemptively, meaning that the highest-priority task that can run will be run. Linux, and its parent Unix, were designed as multi-user time-sharing systems, which is distinctly different. It is for this reason that I suggest that Linux may not be the best choice for your real-time vehicle/motion control application. In addition to the ones I mentioned, there are a few commercial RTOSes available, such as Green Hills and Nucleus. These tend to cost more, but also tend to have security certification; they are, however, closed source. The choice is always a trade-off. Another key difference is that a real-time, embedded system has different requirements when it comes to fault handling. In a real-time system it is not acceptable to write a log entry and close the application. Instead, it oftentimes must find a way to recover and keep on processing. It is in the handling of these types of conditions that an RTOS is going to be designed fundamentally differently from a general-purpose, multi-user operating system like Linux or Windows.
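To illustrate the IPC primitives I mentioned: they have close POSIX equivalents on Linux, and a minimal semaphore handoff between two threads (compile with -pthread) looks like this:
Code:
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t ready;

static void *producer(void *arg)
{
    (void)arg;
    puts("producer: event occurred");
    sem_post(&ready);           /* signal the waiting consumer */
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&ready, 0, 0);     /* initial count 0: consumer blocks */
    pthread_create(&t, NULL, producer, NULL);
    sem_wait(&ready);           /* block until the producer posts */
    puts("consumer: woke up");
    pthread_join(t, NULL);
    sem_destroy(&ready);
    return 0;
}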

Similarly, I would also suggest that an x86-based platform may not be a good choice either. The modern x86 processor is really a RISC processor that emulates and supports a large instruction set, making it appear as a classic CISC processor. Some examples of what I mean by this are the string-handling functions being supported in assembly; having few registers (EAX, EBX, ECX, EDX, along with various other pointer registers); and having a unified memory map where code and data are contained in the same memory system (I don't mean segmentation here, I mean the same physical memory, a von Neumann architecture, read over the same processor bus). These processors have floating-point capability and whatnot, but they are not oriented towards the type of operations performed in most "real time" systems, which usually involve computing some form of differential equation system (filters, equations of motion, servo-feedback systems, etc.).

By way of comparison, a dedicated processor oriented towards a control system will have a completely different architecture. It will have separate, parallel memory buses (a Harvard architecture) that allow the simultaneous fetching of code, constant data from ROM/flash, and variable data from memory, with multiply-and-accumulate functions built in. Along with that, it will have dedicated hardware for interfacing to things like quadrature position encoders, and PWM inputs and outputs for motor control. Additionally, the vendor oftentimes will supply libraries of already highly optimized functions for computing things like position state vectors, filters, and all sorts of other items. Please take a look at this link for an example. (Note: I was getting "service not available" errors and had to keep clicking a few times before it came up; they must be having trouble today.) The net result is that you can achieve a much higher degree of performance in these types of applications, at a lower, or at least comparable, cost in terms of total hardware.
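To show what I mean by multiply-and-accumulate: the inner loop of an FIR filter, written in plain C below, is exactly the operation such DSP hardware retires in one cycle per tap, where a classic x86 has to serialize the fetches over a single bus:
Code:
#include <stdio.h>

#define TAPS 4

/* Plain-C FIR filter: one multiply-accumulate (MAC) per tap. */
static double fir(const double h[TAPS], const double x[TAPS])
{
    double acc = 0.0;
    for (int i = 0; i < TAPS; i++)
        acc += h[i] * x[i];     /* the MAC operation */
    return acc;
}

int main(void)
{
    const double h[TAPS] = { 0.25, 0.25, 0.25, 0.25 }; /* moving average */
    const double x[TAPS] = { 1.0, 2.0, 3.0, 4.0 };     /* sample history */
    printf("y = %f\n", fir(h, x));                     /* prints 2.5 */
    return 0;
}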

In terms of software, these days even these smaller, embedded systems can be programmed in C and even C++. You need to be careful with C++ and will need to avoid things like virtual inheritance and templates, which are very heavy in terms of "behind the scenes" processing. When writing your algorithm for real time, you should be extremely careful about judging performance when it is written in C++ (and ultimately, I doubt you really want to use C++ for this part of the code). You will need to profile the timing of these portions of the code and determine what your "real time margins" truly are (e.g. the 100 µs I mentioned in the application I worked on). Of course, Linux and C++ are excellent environments for concept development and prototyping; I am just suggesting that perhaps they aren't the best choice for your application.
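One simple way to start that profiling on Linux is to time the critical section with CLOCK_MONOTONIC and compare it to the period budget. The 100 µs budget here is just my earlier example, and the workload is a stand-in:
Code:
#include <stdio.h>
#include <time.h>

#define BUDGET_NS 100000L       /* 100 us period budget (example value) */

static void critical_section(void)
{
    volatile long x = 0;
    for (int i = 0; i < 1000; i++)
        x += i;                 /* stand-in for the real algorithm */
}

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    critical_section();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
            + (t1.tv_nsec - t0.tv_nsec);
    printf("took %ld ns, margin %ld ns of %ld\n",
           ns, BUDGET_NS - ns, BUDGET_NS);
    return 0;
}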
 
  

