One of the RCEM tools described in the last blog was the Design Failure Mode Effects Analysis (DFMEA) methodology.
DFMEA is an analysis tool used to explore and document the ways a product might fail in real-world use.
The development of a DFMEA is predominantly a team effort and, contrary to popular belief, it’s not simply an engineering responsibility.
DFMEAs document:
- the key functions of a product,
- the potential failure modes relative to each function
- and the causes of each failure mode.
The DFMEA methodology allows one to document what is known and contemplate a product's potential failure modes prior to completing any design work. The information is then utilized to mitigate risks, design out failure modes and increase product reliability.
DFMEAs are (ideally) conducted at the earliest stages of concept development and are used as an iterative tool to select between competing designs or concepts.
Though DFMEAs are specific to design, PFMEAs (process) and MFMEAs (manufacturing) help identify and mitigate processing and assembly risks. Having operations and supplier management staff join in on the DFMEA development is often well received and adds significant value.
Conducting a DFMEA
Review the design
- Drawings / schematics of the design/product or prototype/mock up
- Identify each component
- Identify each interface
Brainstorm potential failure modes
- Review existing documentation
- Pull failure data from previous generation products
- Compare failure modes from similar products
- Use customer complaints, warranty reports, and reports that identify things that have gone wrong, such as hold tag reports, scrap, damage, and rework, as inputs for the brainstorming activity
List the potential effects of failure
- What happens if the component / interface fails?
- There may be more than one effect per failure mode: the product cannot function, limited functionality, an appearance issue (the product still works), etc.
Assign Severity rankings
- The severity ranking is based on a relative scale ranging from 1 to 10.
- A “10” means the effect has a dangerously high severity leading to a hazard without warning.
- A severity ranking of “1” means the severity is extremely low.
- This is a relative scale, not an absolute.
- Assigning severity rankings is critical. Severity rankings are the basis for determining risk of not only a potential failure mode but the interaction of different failure modes.
- Note: once you have established the severity ranking system, it should be used for every product throughout the organization. Define it once, as a company-wide exercise, so that all departments and projects are ranked consistently and can be compared.
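A company-wide severity scale can live as a single shared reference table that every project imports. The descriptions below are a hypothetical sketch of such a scale, not values taken from any published standard:

```python
# Hypothetical company-wide severity scale, defined once and shared
# across all projects so that rankings stay comparable between teams.
SEVERITY_SCALE = {
    10: "Hazardous without warning",
    9: "Hazardous with warning",
    8: "Loss of primary function",
    7: "Degraded primary function",
    6: "Loss of secondary function",
    5: "Degraded secondary function",
    4: "Appearance defect noticed by most customers",
    3: "Appearance defect noticed by some customers",
    2: "Appearance defect noticed by few customers",
    1: "No discernible effect",
}

# Each team looks up the same wording when assigning a ranking,
# rather than inventing its own interpretation of, say, a "7".
print(SEVERITY_SCALE[10])
```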
Assign Occurrence rankings
- Like the Severity ranking, the Occurrence ranking is a relative 1-to-10 scale based on how frequently the cause of the failure is likely to occur.
- An occurrence ranking of “10” means the failure mode occurrence is very high; it happens all of the time. Conversely, a “1” means the probability of occurrence is remote.
- Occurrence rankings can also be developed on three different bases (time-based, event-based, and piece-based); select the basis that applies to the design or product.
Assign Detection rankings
- Based on the chances the failure will be detected prior to the customer finding it
- Think of the detection ranking as an evaluation of the ability of the design controls to prevent or detect the mechanism of failure.
- To assign detection rankings, consider the design or product-related controls already in place for each failure mode and then assign a detection ranking to each control.
- A detection ranking of “1” means the chance of detecting a failure is almost certain.
- “10” means the detection of a failure or mechanism of failure is absolutely uncertain.
Calculate the Risk Priority Number (RPN) for each issue
- RPN = Severity x Occurrence x Detection.
- The RPN gives us a relative risk ranking. The higher the RPN, the higher the potential risk.
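The RPN step above can be sketched as a small calculation that multiplies the three rankings and sorts issues by relative risk. The failure modes and 1-10 rankings below are hypothetical examples for illustration only:

```python
# Minimal RPN sketch: RPN = Severity x Occurrence x Detection,
# giving a relative risk ranking from 1 to 1000.
# (description, severity, occurrence, detection) -- hypothetical values
failure_modes = [
    ("Seal leaks under thermal cycling", 8, 4, 3),
    ("Housing cracks on drop", 7, 2, 6),
    ("Label fades (appearance only)", 2, 5, 2),
]

# Compute each RPN and sort highest-risk first, since the higher
# the RPN, the higher the potential risk.
ranked = sorted(
    ((desc, s * o * d) for desc, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```

Sorting by RPN gives the team a starting point for the action plan: the highest-RPN items are typically addressed first.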
Develop an action plan
- Define who will do what by when.