
Is there any standard method for flowing down EMC/EMI requirements to internal modules?

EMC/EMI standards (conducted/radiated emission/susceptibility) are well established for system boxes. However, there appear to be no standards, nor methods, for deriving a specification for internal modules that sit inside the RFI-filter-protected Faraday cage.


I am looking at frequencies below lambda/100 relative to the module size. I am particularly concerned with sensors that are noise-floor limited, where it is difficult to detect additional injected noise, or noise that is exported to other modules within the enclosure.
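As a quick sanity check on where the lambda/100 regime ends, the upper frequency for a given module size can be computed directly. The 100 mm module size below is an assumed example, not a figure from this post:

```python
C = 3e8  # speed of light, m/s

def max_near_field_freq(module_size_m: float) -> float:
    """Frequency at which the module size equals lambda/100,
    i.e. the top of the deep near-field regime discussed here."""
    wavelength = 100.0 * module_size_m  # lambda = 100 * module size
    return C / wavelength

# Assumed example: a 100 mm module stays below lambda/100 up to 30 MHz
print(max_near_field_freq(0.1) / 1e6)  # 30.0 (MHz)
```

So for decimetre-scale modules the regime of interest runs up to a few tens of MHz, comfortably inside the conducted-emission bands.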


Being below lambda/100, the coupling will be essentially near-field, i.e. capacitive and occasionally inductive, and the main threat will be the chassis itself because of the system box's RFI filters (whose Y capacitors induce half-supply noise on the chassis relative to the '0V' supply return).
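The "half-supply" figure follows from the Y capacitors acting as a capacitive divider between the supply lines and the chassis. A minimal sketch, with 2.2 nF chosen purely as an illustrative value:

```python
def chassis_fraction(c_line_to_chassis: float, c_return_to_chassis: float) -> float:
    """Capacitive divider formed by the RFI filter's Y capacitors.
    Capacitor impedance is 1/(j*w*C), so the fraction of supply noise
    appearing on the chassis is C_line / (C_line + C_return)."""
    return c_line_to_chassis / (c_line_to_chassis + c_return_to_chassis)

# Equal Y capacitors (the usual case) put half the supply noise
# on the chassis relative to the 0V return.
print(chassis_fraction(2.2e-9, 2.2e-9))  # 0.5
```

The exact fraction matters less than the observation that the chassis is a driven node, not a quiet reference.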


Possible modules include pluggable PCBs, and camera and image-intensifier modules (which are near photon-noise limited and will alias and mix any injected noise).


I'm looking for documented/public methods and test procedures that can be used as formal references for a Statement of Work and for sub-Module specifications.


      Philip Oakley


I've also asked this (just now) at https://electronics.stackexchange.com/questions/313725/a-standard-method-for-flowing-down-emc-emi-requirements-to-internal-modules


[Extra discussion]


In terms of possible methods, I have considered that the external EMC (conducted/radiated) levels should be reverse-engineered across the RFI filter to determine the internal levels of power-line noise, and from that the chassis-injected noise level relative to the internal system 0V. Note that the power-line noise computed here is not the module power line, because typically there will be a system-internal PSU between the two.
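The reverse-engineering step above can be sketched as simple dB arithmetic: the worst-case noise inside the box is the external conducted-emission limit plus the filter's insertion loss at that frequency. The figures below are assumed for illustration only, not taken from any standard:

```python
def internal_level_dbuv(external_limit_dbuv: float, filter_attenuation_db: float) -> float:
    """Work the conducted-emission limit backwards across the RFI filter:
    internal noise can be as high as the external limit plus the filter's
    insertion loss before the box would fail its emission test."""
    return external_limit_dbuv + filter_attenuation_db

def dbuv_to_volts(level_dbuv: float) -> float:
    """Convert dBuV to volts (0 dBuV = 1 uV)."""
    return 10 ** (level_dbuv / 20.0) * 1e-6

# Assumed example: 60 dBuV external limit, 40 dB filter attenuation
level = internal_level_dbuv(60.0, 40.0)
print(level, dbuv_to_volts(level))  # 100.0 dBuV, i.e. 0.1 V inside the box
```

This gives the internal power-line noise bound; the chassis-injected level then follows from the Y-capacitor divider ratio.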


The chassis-level signals can now be identified as the injectable test signals. These can be applied in a number of ways, such as random white/pink noise, spot sine-wave sweeps, or phase-aligned edge spikes (the inverse FFT of the amplitude spectrum implies a step rising edge). In many systems the sine and spike tests should be near-synchronised with the system operation/sampling frequency so that they beat together (just as American NTSC TV hum bars do). Ideally the 'hum' should be at about 2 Hz for visual effects (scroll bars, flashing, signals displayed on oscilloscopes), or a pleasant audio tone if the effect is heard.
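The beat trick above can be sketched as follows: inject a tone offset from the system sampling rate by the desired beat frequency, and the aliased product scrolls at that rate. The 50 Hz frame rate is an assumed example:

```python
import math

def beat_test_tone(sample_rate_hz: float, beat_hz: float = 2.0) -> float:
    """Choose an injection frequency just above the system sampling rate so
    that, after sampling, the aliased product moves at beat_hz (the same
    mechanism as NTSC hum bars). Returns the injection frequency."""
    return sample_rate_hz + beat_hz

fs = 50.0                    # assumed imager frame rate, Hz
f_inj = beat_test_tone(fs)   # 52.0 Hz injection tone

# One second of the injected sine, sampled at the system rate:
samples = [math.sin(2 * math.pi * f_inj * n / fs) for n in range(int(fs))]
# By aliasing, this sequence is identical to a 2 Hz sine sampled at 50 Hz,
# so the interference shows up as a slow, easily spotted 2 Hz artefact.
```

The point of the 2 Hz choice is purely human factors: it is slow enough to see on a display or scope, or to hear as a distinct tone if made audible.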


It is important not to use the random noise when its injected level is meant to be below the noise floor. It's OK if the injected effect will be the noise floor, but expensive sensors usually want the noise floor to be limited elsewhere (such as by the laws of physics: thermal and blackbody noise). The RSS (root sum square) effect means that added noise of up to 50% is essentially not noticed at all.
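The RSS claim is easy to verify numerically: uncorrelated noise sources add as the root of the sum of squares, so 50% added noise raises the total by only about 12%, roughly 1 dB:

```python
import math

def rss_increase(added_fraction: float) -> float:
    """Fractional increase in total noise when uncorrelated noise is added
    at `added_fraction` of the existing noise floor (root-sum-square)."""
    return math.sqrt(1.0 + added_fraction ** 2) - 1.0

# Added noise at 50% of the floor raises the total by only ~11.8%,
# which is why it is effectively invisible on a noise-floor-limited sensor.
print(round(rss_increase(0.5) * 100, 1))  # 11.8
```

This is exactly why below-floor random-noise injection is an unreliable test: the pass/fail signature it would produce is smaller than normal measurement scatter.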


The test itself should surround the EUT with a test chassis, with say a 5 mm clearance (or your local clearance rules), driven by a low-impedance, galvanically isolated test signal (i.e. step-down-transformer coupled). (Remember, the Wiring Regulations have treated Earth as a source of dangerous potential for many years ;-)


In some cases the problems may appear "inductive", but often these are really a current loop that has been closed by stray capacitive coupling. The usual fix is adding inductance around the signal cable (think of VDU cables with their cylindrical ferrite filters), even though the underlying problem is the stray capacitance.


Unfortunately, none of this (method/approach) is available as a documented standard I can call up, hence the request.