
Novelty in Design Evaluation Metrics
   When evaluating an engineering design concept, novel solutions should be weighted more favorably than existing ones. Constraining design concepts strictly to existing solutions yields a product that offers little innovation to the market. Without innovation, products are less competitive and may lose value rapidly relative to competing offerings.

The proposed experiment on critical examination during brainstorming can be readily adapted to incorporate this metric. Brainstorming activities generate a large number of concepts or solutions in a short period. Each idea will be graded on how far it departs from the typical solution to that particular problem.

In a hypothetical brainstorming session centered on a proposed tool-fixture design, for example, atypical responses may be weighted more heavily because of their novel approach to the problem. Responses that employ features such as magnetics, gyroscopes, or photonics may provide a unique solution to a problem that could otherwise go unsolved. These responses should be prioritized to ensure they receive due consideration.

Shah writes that the scoring of novel designs should be done using a weighted summation system [1]. In the example used in Shah's study, specific attributes related to the propulsion of a vehicle were prioritized relative to other features. The medium the vehicle was designed to operate in was given a higher priority than the motive force that moves the vehicle through that medium. The result is that the attributes most valuable for developing novel solutions are weighted more heavily than less relevant features [1].

In Shah's study, part count received the lowest novelty weighting [1]. This is to be expected: a novel propulsion method offers far more potential to distinguish the design than a change in part count, and the reverse would be difficult to justify. Shah's study placed the highest weighting on thrust and the medium of transport. This shows a strong preference toward non-ground-based vehicles, as any watercraft or aircraft would require a non-standard propulsion technique.
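As a rough sketch of how such a weighted summation could be computed, the snippet below scores a single idea. The attribute names, weights, and the rarity-based per-attribute scores are illustrative assumptions for this article, not values taken from Shah's study.

```python
# Hedged sketch: weighted-summation novelty score for one idea.
# Attribute weights and the rarity-based scoring rule are illustrative
# assumptions, not Shah's published values.

# Relative importance of each attribute (higher = more prized for novelty).
weights = {"medium": 0.4, "propulsion": 0.4, "part_count": 0.2}

# How often each of the idea's attribute choices appeared among all ideas
# generated in the session (counts gathered by the experiment staff).
total_ideas = 40
occurrences = {"medium": 3, "propulsion": 5, "part_count": 30}

def novelty_score(weights, occurrences, total_ideas):
    """Score rarity per attribute (0-10) and combine with a weighted sum."""
    score = 0.0
    for attribute, weight in weights.items():
        rarity = 10 * (total_ideas - occurrences[attribute]) / total_ideas
        score += weight * rarity
    return score

print(f"Novelty score: {novelty_score(weights, occurrences, total_ideas):.2f}")
```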

In terms of scoring novelty in the proposed experiment, each solution provided by the control group should be categorized based on common attributes. The experiment staff could then weight these attribute categories based on the desired outcomes for the product. Next, the control group's responses could be scored for novelty. Finally, the test group's answers could be scored in the same way and the results compared to support or refute the hypothesis.
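If the hypothesis is simply that critique changes average novelty, the final comparison step might look like the minimal sketch below; the score lists are placeholders for values produced by the experiment staff, and a real analysis would add a significance test.

```python
# Hedged sketch: compare mean per-idea novelty between the two groups.
# The score lists are placeholders, not experimental data.
from statistics import mean

control_scores = [4.2, 5.1, 3.8, 6.0, 4.7]   # control group (no critique)
test_scores    = [5.5, 6.3, 4.9, 7.1, 5.8]   # test group (critique allowed)

print(f"Control mean novelty: {mean(control_scores):.2f}")
print(f"Test mean novelty:    {mean(test_scores):.2f}")
print(f"Difference:           {mean(test_scores) - mean(control_scores):+.2f}")
```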

Variety in Design Evaluation Metrics
   Variety is another metric by which the proposed experiment's success may be measured. Greater variety means a larger number of divergent ideas in the solution space, which improves the chance that a collaborative design effort arrives at an ideal solution. Shah defines variety as a measure of how different the ideas within a set are from one another [1]. Concepts that rely on different physical principles show large variety between them, whereas concepts that differ only in quantifiable measurements of physical properties do not.

Further elaboration of variety may be accomplished through functional design trees. As a proposed system diverges from the norm, the embodiment details of that system become more specialized. Visually, this manifests as increased branch density in a top-down functional diagram. These genealogy diagrams may be scored objectively by weighting higher-level branches more heavily than lower-level branches, with variety determined by the divergence from the norm at each branch level.
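As one way to picture this, a genealogy tree could be represented as nested nodes tagged by level; the nested-dictionary representation and the example fixture branches below are hypothetical.

```python
# Hedged sketch: a genealogy (functional design) tree as nested dictionaries.
# Field names and the example branches are hypothetical.
fixture_tree = {
    "function": "hold workpiece",            # level 1: physical principle
    "children": [
        {"function": "mechanical clamping",  # level 2: working principle
         "children": [
             {"function": "screw clamp", "children": []},   # level 3: embodiment
             {"function": "toggle clamp", "children": []},
         ]},
        {"function": "magnetic holding",
         "children": [
             {"function": "permanent magnet base", "children": []},
         ]},
    ],
}

def count_branches_per_level(node, level=1, counts=None):
    """Count how many branches appear at each tree level."""
    counts = counts if counts is not None else {}
    counts[level] = counts.get(level, 0) + 1
    for child in node["children"]:
        count_branches_per_level(child, level + 1, counts)
    return counts

print(count_branches_per_level(fixture_tree))  # {1: 1, 2: 2, 3: 3}
```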

For the purposes of the proposed experiment, variety may be measured using Shah's approach, as discussed. Brainstorming will be performed by the control and test groups, with the results recorded, and the experiment staff will perform the weighting and scoring.
   Shah's scoring system for the variety metric uses a weighted summation over all functions being evaluated. Each branch receives a weighted score based on its position in the genealogical tree, and for each idea under consideration the scores of all branches are added together.
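A minimal sketch of that summation, reusing the branch counts from the tree sketch above: higher levels receive larger weights, and the weighted counts are summed. The level weights and the normalization by idea count are illustrative assumptions, not Shah's published values.

```python
# Hedged sketch: weighted summation over tree levels for the variety metric.
# Level weights are illustrative assumptions; higher (more fundamental)
# levels are weighted more heavily than lower ones.
level_weights = {1: 10, 2: 6, 3: 3, 4: 1}

def variety_score(branch_counts, level_weights, total_ideas):
    """Sum weight * branch count across levels, normalized by idea count."""
    weighted_sum = sum(level_weights.get(level, 1) * count
                       for level, count in branch_counts.items())
    return weighted_sum / total_ideas

branch_counts = {1: 1, 2: 2, 3: 3}   # e.g. output of count_branches_per_level
print(f"Variety score: {variety_score(branch_counts, level_weights, total_ideas=10):.2f}")
```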

Quality in Design Evaluation Metrics
   Quality is a difficult characteristic of an idea to measure. Because some quality measures are subjective to the individual scorer, a better method is needed to ensure an objective score. Since early design decisions are the strongest drivers of the eventual design, high-quality ideas must be identified and scored appropriately.

In Shah's evaluation method, quality is measured with a simple point-based system [1]. Questions are posed about each design, and points are awarded, withheld, or subtracted depending on whether the assigned criteria are met. For example, Design for Manufacturing calls for length restrictions on machined parts based on the radius of the initial bar stock. If the design comfortably exceeds these criteria, it could be awarded a point; if it meets the criteria but without a safety factor, it could receive no points; and if it fails the criteria, it could lose a point. In this way, subjective bias is removed from quality measurements.
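A small sketch of that point system is shown below; the criterion, the length-to-radius limit, and the safety margin are hypothetical examples rather than values from any standard.

```python
# Hedged sketch: point-based quality scoring (+1 / 0 / -1 per criterion).
# The criterion, limit, and margin are hypothetical examples.

def score_machined_length(length_mm, bar_radius_mm, max_ratio=8.0, margin=0.8):
    """+1 if well within the length-to-radius limit, 0 if marginal, -1 if over."""
    ratio = length_mm / bar_radius_mm
    if ratio <= max_ratio * margin:     # comfortably inside the limit
        return 1
    if ratio <= max_ratio:              # meets the limit, but no safety factor
        return 0
    return -1                           # violates the limit

def quality_score(design, criteria):
    """Sum the +1/0/-1 results of every criterion applied to a design."""
    return sum(check(design) for check in criteria)

design = {"length_mm": 120, "bar_radius_mm": 20}
criteria = [lambda d: score_machined_length(d["length_mm"], d["bar_radius_mm"])]
print(f"Quality score: {quality_score(design, criteria)}")
```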

The quality of proposed solutions may also be evaluated using the Quality Function Deployment House of Quality system [2]. This system uses a matrix to score engineering characteristics based on how well they address marketing-driven customer requirements. By ensuring that engineering solutions align closely with customer requirements, the relative value of each proposed solution may be judged according to its ranking.
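As a rough sketch of the House of Quality idea: each engineering characteristic's weight is the sum of customer-requirement importance times relationship strength. The requirements, characteristics, and ratings below are made up for illustration.

```python
# Hedged sketch: House of Quality style weighting. Customer importance (1-5)
# times relationship strength (0, 1, 3, 9) is summed per engineering
# characteristic. All names and ratings are illustrative assumptions.
customer_importance = {"easy setup": 5, "holds securely": 4, "low cost": 3}

# relationships[customer requirement][engineering characteristic] = strength
relationships = {
    "easy setup":     {"clamping force": 1, "setup time": 9, "part count": 3},
    "holds securely": {"clamping force": 9, "setup time": 0, "part count": 1},
    "low cost":       {"clamping force": 0, "setup time": 1, "part count": 9},
}

def characteristic_weights(customer_importance, relationships):
    """Weighted importance of each engineering characteristic."""
    weights = {}
    for requirement, importance in customer_importance.items():
        for characteristic, strength in relationships[requirement].items():
            weights[characteristic] = weights.get(characteristic, 0) + importance * strength
    return weights

print(characteristic_weights(customer_importance, relationships))
# {'clamping force': 41, 'setup time': 48, 'part count': 46}
```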

For the purposes of the proposed experiment, quality rankings may be produced using a combination of the House of Quality system and the point-based scoring system proposed by Shah. The control and test groups' ideas will be scored by the experiment staff using these two systems.

Quantity in Design Evaluation Metrics
   In brainstorming, it is desirable to create a large number of potential solutions in a short amount of time, which increases the chance that an ideal solution is presented during the session. The proposed experiment will introduce debate into the brainstorming sessions, which puts the quantity metric at risk: individuals may be reluctant to submit ideas for fear of ridicule. As a result, the quantity metric may be one of the most telling metrics of this experiment.
   Quantifying design results is straightforward: count the number of ideas proposed. Counts may be tallied on a per-person and per-session basis to baseline the control group against the test group. This allows ready identification of the debate's impact on the number of ideas posed during the brainstorming sessions.
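A minimal sketch of the quantity tally, assuming the session log is recorded as one (participant, group) entry per submitted idea; the names and counts are placeholders.

```python
# Hedged sketch: idea counts per person and per group. The session log
# format of one (participant, group) record per idea is an assumption.
from collections import Counter

idea_log = [
    ("alice", "control"), ("alice", "control"), ("bob", "control"),
    ("carol", "test"), ("carol", "test"), ("dave", "test"), ("dave", "test"),
]

per_person = Counter(participant for participant, _ in idea_log)
per_group = Counter(group for _, group in idea_log)

print("Ideas per person:", dict(per_person))
print("Ideas per group: ", dict(per_group))
print("Mean ideas per person (control):",
      per_group["control"] / len({p for p, g in idea_log if g == "control"}))
```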

References:
   [1] Shah, J. J., Smith, S. M., and Vargas-Hernandez, N., 2003, "Metrics for measuring ideation effectiveness," Design Studies, 24(2) pp. 111-134.
   [2] Pahl, G., Beitz, W., Feldhusen, J., and Grote, K.-H., 2007, Engineering Design: A Systematic Approach, 3rd ed., Springer, London.