ARTICLE

Misconceptions and Misunderstandings about LCA

Wayne Trusty, President, Athena Sustainable Materials Institute

Part Two of a Three-Part Series

Diagram of the Life Cycle Assessment process for building products. Graphic courtesy of Athena Sustainable Materials Institute.

In the first of this series of three articles, I provided a basic overview of what Life Cycle Assessment (LCA) is and what it entails. This second article focuses on common misconceptions and misunderstandings about LCA by addressing typical criticisms of the method. Limited understanding poses a continuing problem, not only for LCA practitioners and entities promoting LCA, but also for code developers, policy makers and other users of study results.

Users are too frequently uninformed about the appropriate and acceptable environmental impact measures, the sources of data, or even how tools relate to each other and to the ISO standards. There is a tendency for organizations to argue for avoiding LCA altogether, especially in standards and code development processes where an organization perceives that the use of LCA might put it at a competitive disadvantage. This lack of understanding can undermine the value of LCA studies if those adversely affected present what are actually false arguments against the method, arguments frequently encapsulated in the phrase, “LCA is not ready for prime time.”

LCA may be an evolving method, as is virtually all science-based methodology, but it is not an emerging one as some suggest. It has been under active development at the international level for almost half a century. The fact that it is a complex methodology does, however, create issues and challenges for both practitioners and users, and it is therefore essential that there be an ongoing education process.

Studies of the Same Product Yield Different Results

This can certainly be the case, but one has to look at why rather than simply dismissing the method. The boundaries established for a study, the functional unit that is assessed, the impact measures that are reported, and even the extent to which all life cycle stages are taken into account can differ from one study to another. In recent years, there has been a concerted international effort to ensure that such differences are either eliminated in comparative studies or the reasons clearly delineated in reports. LCA is not a method that results in a simple score; it is one that requires users to read the report and understand what has been done and why. The next few subsections look at some of the specific aspects that can lead to this kind of criticism.
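
To illustrate why such differences matter, here is a minimal sketch with invented numbers (not drawn from any actual study) showing how the choice of functional unit alone can reverse a comparison: two hypothetical coatings ranked per litre versus per square metre of wall kept coated over a 40-year study period.

    # Minimal sketch, illustrative numbers only: the same two products can
    # rank differently depending on the functional unit chosen for the study.
    coatings = {
        # litres needed per m2 per application, years one application lasts,
        # and cradle-to-gate GWP per litre (kg CO2-eq) -- all hypothetical
        "Coating X": {"litres_per_m2": 0.10, "life_years": 10, "gwp_per_litre": 3.0},
        "Coating Y": {"litres_per_m2": 0.12, "life_years": 20, "gwp_per_litre": 4.0},
    }

    study_period_years = 40  # functional unit: 1 m2 of wall kept coated for 40 years

    for name, c in coatings.items():
        applications = study_period_years / c["life_years"]
        gwp_per_fu = applications * c["litres_per_m2"] * c["gwp_per_litre"]
        print(f"{name}: {c['gwp_per_litre']:.1f} kg CO2-eq per litre, "
              f"{gwp_per_fu:.2f} kg CO2-eq per functional unit")

Per litre, Coating Y looks worse; per functional unit, it looks better, simply because it lasts longer. Neither answer is wrong, but they answer different questions, which is why the functional unit must always be stated.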

Le Solaire in New York City was assessed by the U.S. National Team in the Green Building Challenge 2002 process. Life Cycle Assessment of the building was conducted using the Athena Institute's Impact Estimator software. 

The Study or Tool Doesn’t Deal with the Whole Life Cycle
It’s true. Not all studies deal with the full life cycle because for many products the manufacturer simply has no way of knowing exactly how the product will be used, maintained and treated at the end of its service life. The standards cover that: a cradle-to-plant-gate LCA is referred to as an information module, with the term LCA reserved for full cradle-to-grave studies. The standards for LCA-based Environmental Product Declarations (EPDs) also cover that, distinguishing business-to-business (B to B) EPDs that contain cradle-to-gate results from business-to-consumer (B to C) EPDs that cover cradle to grave. The standards also set out criteria so that EPDs can be aggregated and/or properly compared.
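
As a simple illustration of how such modules are meant to fit together, the sketch below uses placeholder numbers (not drawn from any actual EPD) to add hypothetical use-stage and end-of-life results to a cradle-to-gate figure, producing a cradle-to-grave total for a single indicator.

    # Minimal sketch, placeholder values only: combining a cradle-to-gate
    # "information module" with assumed downstream stages to reach a
    # cradle-to-grave total for one indicator (kg CO2-eq per declared unit).
    cradle_to_gate = {"GWP_kgCO2e": 120.0}   # what a B-to-B EPD might report
    use_stage      = {"GWP_kgCO2e": 15.0}    # maintenance/replacement, assumed
    end_of_life    = {"GWP_kgCO2e": 8.0}     # disposal/recycling, assumed

    cradle_to_grave = {
        key: cradle_to_gate[key] + use_stage[key] + end_of_life[key]
        for key in cradle_to_gate
    }
    print(cradle_to_grave)  # {'GWP_kgCO2e': 143.0}

The point is not the arithmetic but the bookkeeping: a cradle-to-gate module is a building block, and it only becomes a full LCA when the downstream stages are added under consistent rules.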

The Study Doesn’t Show Human Health or Ecotoxicity Impacts
This criticism goes to the question of impact measures. As LCA evolved over the years, not only were different impact measures developed, but there were also different methods developed for calculating the measures. The types of impact measures basically subdivide into two main categories: mid-point and end-point. Mid-point measures can be thought of as measures of environmental loading; for example, the release of greenhouse gases. End-point measures are essentially measures of ultimate impacts on human and ecosystem health. There are also measures that are simply aggregations of Life Cycle Inventory (LCI) data; energy use for example. 

Uncertainty increases as one moves from LCI data aggregations to mid-point and then to end-point measures. Further, while there has been good scientific agreement and consistency with regard to the characterization factors used to calculate mid-point impact indicators, that has not been the case with end-point impact measures. In 2005, the United Nations Environment Program (UNEP) commissioned a study to examine the issue and found a huge uncertainty range across seven different methods for calculating end-point impacts. As a result, such end-point measures as carcinogenic and non-carcinogenic human health effects and ecotoxicity have been dropped, or are in the process of being dropped, and a scientific consensus model – USEtox – is being adopted. USEtox contains only the most influential model elements and significantly reduces the uncertainty level, but is still not a recognized impact measure in the standards and should be treated with caution.
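
For readers unfamiliar with characterization factors, the sketch below shows the basic arithmetic behind a mid-point indicator: each inventory flow is multiplied by its factor and the results are summed. The emission quantities are invented, and the factors are approximate 100-year global warming potentials used purely for illustration; actual studies use the factors prescribed by the chosen impact assessment method.

    # Minimal sketch, hypothetical flows: a mid-point indicator is a
    # characterization-factor-weighted sum of life cycle inventory flows.
    lci_flows_kg = {"CO2": 950.0, "CH4": 1.2, "N2O": 0.05}  # invented emissions
    gwp_factors  = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}  # approx. kg CO2-eq per kg

    gwp_total = sum(mass * gwp_factors[gas] for gas, mass in lci_flows_kg.items())
    print(f"Global warming potential: {gwp_total:.1f} kg CO2-eq")  # 994.9

End-point methods go a step further, modelling how such loadings translate into damage to human health or ecosystems, and it is in those additional modelling steps that the uncertainty multiplies.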

LCA doesn’t Include Social Effects or Land Use
This criticism goes to the heart of the issue that one method or tool cannot and should not be expected to do it all. No tool should be criticized for not doing things it was never intended to do, and we have to think instead in terms of a toolkit stocked with complementary tools. For example, work is under way to develop a social impact version of LCA, but it will undoubtedly be in a separate category complementary to environmental LCA. Land use effects and issues such as biodiversity related to resource extraction are unquestionably important. However, they are very site specific and not readily handled at the level of product groups, or even at the level of one company if it has multiple extraction sites in different regions. This is much better handled through complementary tools such as resource extraction certification systems.

Similar comments can be made about risk analysis related to toxic inputs and outputs in a production process. There are better methods for tracking and assessing the risks of such flows.  

Different Tools Give Different Answers
This is another example of where the user has to be informed and understand what a tool is intended for. For example, there are two prominent LCA tools in use in the United States that are intended for different purposes. Both are aimed at the building community as opposed to LCA practitioners. One deals with complete building assemblies and the other with individual products; one covers maintenance and replacement over an assumed 60-year service life and the other doesn't. The reality is that they are complementary tools intended to serve different functions in the decision process.
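
The service-life difference is not a small detail. The sketch below, with hypothetical numbers unrelated to either tool, shows how a product that must be replaced during a 60-year study period accumulates a multiple of its cradle-to-gate burden, which a tool that stops at initial construction will never show.

    import math

    # Minimal sketch, hypothetical inputs: replacements over an assumed
    # building service life multiply a product's cradle-to-gate burden.
    building_life_years = 60
    product_life_years = 20
    gwp_per_installation = 40.0  # kg CO2-eq per installation, assumed

    installations = math.ceil(building_life_years / product_life_years)  # initial + replacements
    life_cycle_gwp = installations * gwp_per_installation
    print(f"{installations} installations over {building_life_years} years "
          f"-> {life_cycle_gwp} kg CO2-eq")  # 3 installations -> 120.0 kg CO2-eq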

Similarly, there are different tools in the market intended for use by LCA practitioners. They come with data that the user can change, and it is essential that the user be trained in their use. 

There is No Consistent, Readily Available Data

The Life Cycle Inventory (LCI) is at the heart of any LCA, and how well the data represents reality strongly influences the value of LCA results. However, data collection is also the most time-consuming and costly part of the process. As the LCA method developed over the years, the absence of comprehensive national databases led to the use of whatever data was easily available; this in turn led to inconsistencies, which further supported arguments put forward by those opposed to LCA.

Now, however, industry associations and their members are increasingly recognizing the importance of making data public by submitting it to national databases – the U.S. LCI Database, for example. By doing so, industry groups or associations are dealing with the problem noted above of tool developers or practitioners using whatever data was conveniently available. There is no doubt that the absence of national LCI databases, especially in North America, has been a major issue and challenge in terms of the comparability and overall quality of LCAs. Fortunately, that issue is being resolved through national and international database development programs.

Other data issues come to the fore and have to be understood by users.  For example:

  • National and regional specificity of data has to be considered and explicitly recognized, particularly when we think of adopting databases from other countries, and even in terms of the applicability of data from one region to another.

  • Industry representativeness versus brand specificity is another important data difference. At an industry level, data should be balanced and representative of the industry as a whole; the same is true of brand-specific data if a company operates more than one plant (see the sketch below). But generic and brand-specific data are two different categories and have to be recognized as such.
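
To show what “balanced and representative” means in practice, the sketch below uses hypothetical plants and values: each plant's result is weighted by its share of production rather than every plant being counted equally.

    # Minimal sketch, hypothetical plants: industry-average data should be
    # production-weighted, not a simple average of whichever plants reported.
    plants = [
        {"name": "Plant A", "output_units": 500_000, "gwp_per_unit": 0.90},
        {"name": "Plant B", "output_units": 150_000, "gwp_per_unit": 1.40},
        {"name": "Plant C", "output_units": 350_000, "gwp_per_unit": 1.05},
    ]

    total_output = sum(p["output_units"] for p in plants)
    weighted_gwp = sum(p["output_units"] * p["gwp_per_unit"] for p in plants) / total_output
    simple_avg = sum(p["gwp_per_unit"] for p in plants) / len(plants)
    print(f"Production-weighted: {weighted_gwp:.2f}  Simple average: {simple_avg:.2f}")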

The Bottom Line
The bottom line with respect to much of the criticism of LCA is a failure to accept a fundamental point: if entities don't follow the standards, or “cheat” in some way, that is a policing problem, not a reason to throw out the methodology. The situation is no different for LCA than for other well-accepted methods such as energy simulation.

Wayne Trusty is President of the Athena Sustainable Materials Institute and its U.S. affiliate, Athena Institute International. He is an Adjunct Associate Professor in the University of Calgary’s Faculty of Environmental Design, a member of the board of the Green Building Initiative, Chair of the Technical Committee established in the U.S. to take the Green Globes rating system through a full American National Standards Institute process, and Chair of the ASTM working group established to develop a standard guide for whole building LCA.

As always, your article ideas and submissions are welcome. Send them to foliver@iccsafe.org along with a daytime phone number at which to contact you with questions.