Heartland Institute Review & Commentary: Chemicals of Concern

January 31, 2012

Executive Summary

Following the passage of “Chemicals of Concern” legislation in Maine, Washington, and California, several states are considering adopting similar statutes. Although everyone supports reasonable measures aimed at protecting human health and the environment, these particular efforts are ill-considered and unnecessary. Chemicals of all kinds play a vital role in ensuring our health, prosperity, and safety. Identification of the risks associated with certain chemicals and the imposition of regulatory requirements to manage these risks is best managed at the federal level. The creation of a hodgepodge of state regulations with varying and vague requirements will ultimately hurt Americans far more than it will benefit them.

To the extent that action to further increase America’s sterling record of chemical safety is desired, it should come in the form of Congressional action to update the Toxic Substances Control Act (TSCA) to reflect the current state of scientific, medical, and environmental knowledge.


Those who advocate “Chemicals of Concern” legislation typically justify their position on the following basis:

  1. The chemical industry is largely unregulated, particularly in the case of new chemicals that are being created every year.
  2. It is likely that some, if not many, of these chemicals represent significant threats to human health and the environment.
  3. Even if a chemical can't immediately be shown to represent a significant threat to human health and the environment, it is better to assume the worst until each chemical can be definitively proven safe (the Precautionary Principle).

Before examining each of these claims more closely, it's important to have a short discussion about risk in the context of chemical exposure. It's simplistic, and from a scientist's point of view incorrect, to attempt to divide the world into two classes of chemicals: those that are “safe” and those that are “unsafe,” or those that are “toxic” and those that are “nontoxic.” Every chemical is toxic at some dose, and when the harmful dose of a particular compound is exceptionally low, we may highlight that relative danger by labeling it a poison. As a way of alerting people to the magnitude of the risk posed by substances that are toxic in low doses, this method of “branding” is quite useful.

Unfortunately, the way we think about toxins and poisons today has been unduly influenced by organizations and individuals who are modern-day Luddites—generally opposed to industry, progress, and prosperity. These organizations and individuals exploit the public's natural fear of the unknown to advance their own agendas. Toward that end, they have hijacked terms such as “toxic” and “poison,” stretching those words until they have become almost meaningless.

This is a matter of concern for two reasons. First, it dilutes the message when real, substantive risks are encountered. One cannot cry “wolf” time after time without causing a significant portion of one's intended audience to tune out. Second, when these kinds of distortions do gain traction, the responses to the perceived threat are—as is the case in California and Maine—usually ill-considered, overly burdensome, and likely to cause far more harm than the benefits they promise.

The principle Paracelsus established during the Renaissance still applies today: The dose makes the poison. For example, sodium and potassium are important electrolytes. A person totally deprived of either cannot live. Yet the ingestion of too much sodium or potassium can lead to sickness or death. Similarly, one would be ill-advised to sprinkle uranium on a hamburger, but our bodies naturally contain a tiny but measurable amount of uranium. The same goes for lead, mercury, arsenic, and a variety of other elements. To give just one more example, selenium is rightly regarded as a highly toxic substance, but most organisms (including humans) have an absolute requirement for small amounts of it as an essential element for life.

This is not to say we should indiscriminately use elements like lead, mercury, arsenic, and selenium with a complete disregard for the hazards associated with them. Instead, we should consider how chemicals are used, the benefits associated with using them, and the nature of the risks associated with their application. Lead, for example, has been a critical component of batteries for more than a hundred years. The lead in your car's battery presents no risk to you or your loved ones in the form in which it is used. The upstream process of lead smelting is a necessary step in ultimately producing that battery, and if not properly controlled, lead smelting could emit significant quantities of lead vapor, which could certainly harm humans and the environment. Accordingly, we have extensive regulatory structures in place to ensure that lead smelters are properly controlled and maintained to keep control equipment in working order, and that they face substantial penalties if they fail to do so.

With the all-important issues of relative risk and risk/benefit analysis in mind, let’s take a closer look at how chemical safety is addressed today in the United States.

Richard J. Trzupek is a chemist who has been employed as an environmental consultant to industry for more than 25 years.