Monday, June 30, 2014

Poverty and Promise in Our Own Backyard

The Rocky Mountain Institute recently released an article about rural electrification on American Indian land that was picked up by Cleantechnica [1]. It begins by quoting some rather unfortunate statistics: "almost 40 percent of the people live without electricity, over 90 percent live below the poverty line, and the unemployment rate exceeds 80 percent." That is just a snapshot of the hardships American Indians are put through. Having been "relocated" to remote lands with few resources, American Indians have scant and often inadequate opportunities to improve their livelihoods [2]. As a white, middle-class New Englander I have very little at stake advocating for American Indian rights, but I do feel passionately that the poverty endured by American Indians on reservation land is one of the largest injustices in American history. That's a strong statement for a blog that typically refrains from such pointed language, but it's a serious matter that deserves a serious tone.

It's perhaps a bit of poetic justice that some of the reservation land that American Indians call home has some of the most promising wind and solar potential in the country. I've collected some resource potential maps from NREL to be displayed alongside a Bureau of Indian Affairs map of reservation locations:

For wind resources, South Dakota and parts of Montana have the most potential, and for solar PV, Arizona and Southern California have a lot of promise. The BIA has already identified the benefits of wind development on American Indian land and has published a report highlighting a few reservations with high wind potential [3]. Already a few large wind projects have been developed. I haven't come across much by way of solar development, which I think is a bit of an overlooked opportunity; the Navajo Nation for instance resides in an area with one of the highest solar insolation levels in the country, yet 40% of homes don't have access to electricity [4].

American Indians developing their renewable energy resources to their full potential is a definite win-win scenario: in-demand jobs, progress toward climate goals, access to electricity, and improved relationships. It's one of the most clear-cut examples of a profitable triple-bottom-line enterprise I can imagine.

--------------------------------------------------------------------------------------------------------------------------
[1] http://blog.rmi.org/blog_2014_06_24_native_energy_rural_electrification_on_tribal_lands
[2] http://www.spotlightonpoverty.org/ExclusiveCommentary.aspx?id=0fe5c04e-fdbf-4718-980c-0373ba823da7
[3] http://www.bia.gov/cs/groups/xieed/documents/text/idc013229.pdf
[4] http://ewb-gt.org/navajo-nation

Thursday, June 26, 2014

Sun Day Sunday

According to some blogger with a solar observatory, last Sunday, June 22nd (see how I like to keep things timely?), was International SUNday, a day devoted to appreciating our life-giving sun through observation (with proper equipment, of course) and education [1] [2]. I came across the holiday after a friend had posted it on Facebook with a list of sun-related facts. I had intended to include the bit of trivia about how the sun could provide some so-many-thousand times more energy than we consume each year, but I couldn't remember the figure and went looking for the numbers behind it. I got distracted instead and decided that perhaps I should follow up with a blog post about it, which you're currently being subjected to.

Factoring in panel efficiency (~20%) and land cover (~30%), the energy that could be captured from the sun is about 420 times our total annual energy consumption [3]. This means that covering roughly 1/420th of the world's land area (~0.2%) in panels would meet all of our transportation and stationary energy needs. That comes to about 110,000 sq. miles, which is about the area of Nevada [4]. It seems like a lot, and it is, mostly because we use A LOT of energy. It seems a bit hopeless to try to tile all of Nevada in panels, but what that doesn't take into account is that we humans have gotten really good at building things over large areas already, and not just simple things: cities. I looked up the population density of the largest cities and found that they're in the 10,000 people/sq. mile ballpark [5] [6]. Half the world's population lives in cities now, so that makes about 335,000 sq. miles of urban area, or 0.6% of land area (a quick sketch of this arithmetic follows the list below).
1) I actually think it's pretty amazing that in 200 years of modernization and urbanization, we've only covered 0.6% of land area; the world is BIG. That said, even having only covered that 0.6%, we've managed to screw up a lot of natural processes. Humans are MESSY.
2) This means that if we cover 1/3 of city area in panels, we'd be able to meet all our energy needs via solar power. Based on my years of playing SimCity, that's roughly what roads typically take up (NOT an endorsement for "solar freakin' roadways," we know that's a silly idea; just serving as a comparison).
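If you want to poke at that arithmetic yourself, here's a minimal sketch of it. The land area is the standard ~57.5 million sq. mile figure; the 420x multiple, the 10,000 people/sq. mile density, and the half-the-world-in-cities share are the same round numbers used above, so the outputs land near (not exactly on) the figures in the post.

    # Rough back-of-envelope for the solar land-area numbers above.
    # All inputs are the same round-number assumptions used in the text.

    LAND_AREA_SQMI = 57.5e6        # Earth's land area, ~57.5 million sq. miles
    SOLAR_MULTIPLE = 420           # usable solar energy vs. annual consumption,
                                   # already net of ~20% panels and ~30% land cover

    panel_area = LAND_AREA_SQMI / SOLAR_MULTIPLE
    print("Panel area to meet all energy needs: %.0f sq. mi (%.2f%% of land)"
          % (panel_area, 100 * panel_area / LAND_AREA_SQMI))
    # -> ~137,000 sq. mi (~0.24% of land) with these round inputs; the post's
    #    Nevada-sized 110,000 sq. mi comes from slightly different rounding

    CITY_DENSITY = 10000           # people per sq. mile, big-city ballpark
    URBAN_POPULATION = 3.5e9       # about half the world's population

    city_area = URBAN_POPULATION / CITY_DENSITY
    print("Current urban area: %.0f sq. mi (%.1f%% of land)"
          % (city_area, 100 * city_area / LAND_AREA_SQMI))
    # -> ~350,000 sq. mi, ~0.6% of land, close to the post's 335,000 sq. mi;
    #    covering roughly a third to 40% of that urban footprint in panels
    #    would meet the requirement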

Now, real talk:
This is a quick analysis based on averages. Solar power isn't available everywhere all the time, and there are locations where it wouldn't make a lot of sense. It was meant to provide context: yes, we can build enough panels to cover that area, because we've already built far more complicated things over larger areas, but no, it isn't going to be easy. Here's where it gets interesting, though. Urbanization is a very strong force; most of the growth in population is going to be in newly urbanized areas in developing regions of China, India, and Africa [7]. In other words, the way we as a civilization will grow over the next 30 years is by building new densely populated areas, not by making current population centers denser. Two-thirds of the new people expected by 2030 will live in buildings that don't exist yet, in regions characterized by high solar insolation [8]. This is a HUGE opportunity. Those new urban areas need to be extremely efficient and reliant on local renewable energy, primarily solar.

Want to change the world? Become a contractor specializing in building low-cost, low-energy apartments in developing countries, or a local policy expert pushing for low-energy building codes for new construction. That will mean the difference between a future of more of the same (that is to say, getting worse) and a future with an inflection point.

--------------------------------------------------------------------------------------------------------------------------
[1] http://www.slate.com/blogs/bad_astronomy/2014/06/22/international_sunday_celebrating_the_nearest_star.html
[2] http://solarastronomy.org/sunday.html
[3] http://en.wikipedia.org/wiki/Solar_energy
[4] http://en.wikipedia.org/wiki/Land
[5] http://www.wolframalpha.com/input/?i=population+density+tokyo%2C+mexico+city%2C+new+york%2C+boston%2C+london
[6] http://www.citypopulation.de/world/Agglomerations.html
[7] http://www.scientificamerican.com/article/cities-may-triple-in-size-by-2030/
[8] http://solargis.info/doc/_pics/freemaps/1000px/ghi/SolarGIS-Solar-map-World-map-en.png

Friday, June 13, 2014

"Where's my F*cking Electric Car?!"

I've been maintaining a pretty good clip on this blog of about 3 new posts a week ("not bad," thought the novice blogger to himself)...until two weeks ago. Since then I've been very wrapped up with work and had to focus on that whole science thing. The problems we're working on are very nuanced, but not unknown in electrochemistry, which is itself a relatively young field. Basically, electrochemistry is hard, and few people outside the science understand just how hard it is (hence the title).

Electrochemistry emerged as a separate field of chemistry only after early scientists had laid the groundwork for general electromagnetic theory and chemistry. Elements were discovered, conservation of mass and matter were accepted, electrostatic generators were built, and electrical detectors were invented all before scientists even started tinkering with electrochemistry. And the first steps were pretty gruesome. In the 1780s and '90s, an Italian doctor (Galvani) dissecting frogs was able to make dead muscle twitch by touching it with different metals connected to each other in series. A physics professor (Volta) disagreed on the mechanism and, in 1800, arranged stacks of different metals and brine-soaked paper to achieve similar results. This was the invention of the battery: the first device that turned chemical energy into electricity, though no one at the time knew how it worked. That didn't stop anyone from using it; application outpacing understanding has been the energy field's MO ever since the battery was first invented.

Electrochemistry got its first big scientific break from Michael Faraday in the 1830s, when he linked the amount of charge passed (current multiplied by time) to the amount of matter deposited in his electroplating experiments. It took another half-century, plus the framing of modern thermodynamics by Willard Gibbs, before Hermann Nernst laid the groundwork of analytical electrochemistry by relating voltage to chemical equilibrium. So only in the final years of the 1800s could we even describe the designed properties of a battery in the simple terms of voltage and current.
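For reference, those two results in their modern textbook form (standard notation, nothing specific to this post): Faraday's law of electrolysis relates deposited mass m to charge passed Q, molar mass M, electrons per ion z, and Faraday's constant F; the Nernst equation relates cell voltage E to the standard potential E^0, the gas constant R, temperature T, electrons transferred n, and the reaction quotient Q_r.

    m = \frac{Q}{F} \cdot \frac{M}{z}          % Faraday's law of electrolysis (1834)

    E = E^{0} - \frac{RT}{nF} \ln Q_r          % Nernst equation (1889)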

So...electrochemistry took a while to get to a point where we could actually analyze it. So what? Shouldn't it have erupted in discovery after discovery since then? Not really. Most of the batteries we use today were invented long before anyone really understood what was going on. Even the modern lithium ion battery, invented in the 1980s, features a component called the "solid-electrolyte interphase" (SEI) that sets the longevity and safety of the cell, yet scientists have only recently begun to understand its structure and composition. In other words, the vanishingly thin layer that determines how long you can use a battery and whether or not it will burst into flames is the least understood part. There's almost too much that goes into the design of a practical battery not to take a trial-and-error approach. There's:

  • The electrical potential of the positive and negative electrodes (sets the cell voltage)
  • The chemical kinetics of the reactions at the positive and negative electrodes (helps determine maximum current)
  • The structure of the electrodes (determined by the conductivity of the reactant species and the speed of the kinetics)
  • The electrolyte composition and properties. This can be further broken down into:
    • Operating pH (determines chemical compatibility with battery housing, electrodes; also determines prevalence of undesired side reactions)
    • Conductivity (helps determine maximum current)
    • Organic vs. Aqueous (determined by cell voltage, cost considerations, and storage reactions)
  • The membrane separating the positive and negative sides of the battery (critical component, helps determine a lot of things)
    • Cell durability
    • Cell efficiency
    • Maximum current draw
    • Cost and manufacturability
    • Operating temperature regimes
  • Cell housing and architecture
    • Sealed vs. Flow (flow batteries limited to liquid energy storage reactions)
    • Bipolar vs. Monopolar (tradeoffs on manufacturability)
This is by no means an exhaustive list. To give an idea of how all these parameters fit together, I'll walk through an example in another post where we'll go from chemical fundamentals to full cell operation. During that exercise, it'll become pretty clear that we're lucky to have even what we have now given how easily things can go wrong.  
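In the meantime, here's a tiny preview of the kind of arithmetic that example will involve: a minimal sketch going from textbook numbers to a theoretical figure for one familiar chemistry (graphite/LiFePO4, my pick for illustration). The molar masses and the ~3.3 V nominal voltage are standard values; everything here ignores the electrolyte, separator, housing, and all the real-world factors in the list above, which is exactly why practical cells land far below the theoretical number.

    # Theoretical numbers for a graphite / LiFePO4 cell from first principles.
    # Active materials only; real cells come in far lower once electrolyte,
    # separator, current collectors, and housing are added.

    F = 96485.0           # C/mol, Faraday's constant

    def specific_capacity_mAh_g(molar_mass_g_mol, electrons=1):
        """Theoretical capacity of an intercalation material, mAh per gram."""
        return electrons * F / (3.6 * molar_mass_g_mol)   # 1 mAh = 3.6 C

    q_graphite = specific_capacity_mAh_g(72.07)    # LiC6: 6 carbons per Li -> ~372 mAh/g
    q_lfp      = specific_capacity_mAh_g(157.76)   # LiFePO4 -> ~170 mAh/g

    # The electrodes are in series, so their capacities per combined gram
    # add like parallel resistors
    q_pair = 1.0 / (1.0 / q_graphite + 1.0 / q_lfp)

    V_nominal = 3.3       # V, typical graphite/LiFePO4 cell voltage
    print("Graphite: %.0f mAh/g, LiFePO4: %.0f mAh/g" % (q_graphite, q_lfp))
    print("Active-material specific energy: %.0f Wh/kg" % (q_pair * V_nominal))
    # -> ~385 Wh/kg theoretical; commercial LiFePO4 cells deliver roughly
    #    100-160 Wh/kg once everything else is included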

Thursday, May 29, 2014

The Future of Transportation

"If I had asked what people wanted, they would have said faster horses." 
-Henry Ford
I've been wanting to write about this topic for quite some time, and given the recent buzz from Google and others, now seems as good a time as any. The topic is self-driving, electric-vehicle car sharing, and I believe it has amazing potential to change how we view personal transportation.

I've mentioned before on this blog the interplay between the efficiency and the economics of electric vehicles. Electric vehicles may be about $10K more expensive, but they are roughly 1/6th the cost to operate. I had said that when gas prices are high, EVs become much more economically attractive. They also become much more attractive as the mileage on them increases; the farther you drive, the more you leverage the lower operating cost to your advantage. The problem is Americans don't drive enough. Well...I mean, Americans drive plenty, but not enough to offset the spread between our extremely cheap gasoline and electricity. To make the spread work more in our favor, we have to increase the number of miles traveled per vehicle; that means car sharing. Car sharing is a great idea. We are terrible at making good use of our own cars; we drive them just 30 miles/day on average, and we're in them only for an hour a day, leaving them sitting unused for the remaining 23 hours [1] (as an aside, the National Household Travel Survey from the US Department of Transportation is a pretty interesting report; you ought to take a look). Car sharing drives up vehicle utilization.
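To put a rough number on "not enough," here's a minimal break-even sketch. The $10K premium and the roughly 6-to-1 operating-cost ratio come from the paragraph above; the ~$0.13/mile gasoline cost is my own ballpark assumption (around 27 mpg at ~$3.50/gal), so treat the output as an order-of-magnitude illustration rather than a precise figure.

    # How many miles does it take for an EV's cheaper operation to pay back
    # its purchase premium? Inputs are rough assumptions, not measured data.

    EV_PREMIUM = 10000           # $, extra up-front cost of the EV (from the text)
    GAS_COST_PER_MILE = 0.13     # $, assumed: ~27 mpg at ~$3.50/gal
    EV_COST_PER_MILE = GAS_COST_PER_MILE / 6.0   # "1/6th the cost to operate"

    savings_per_mile = GAS_COST_PER_MILE - EV_COST_PER_MILE
    breakeven_miles = EV_PREMIUM / savings_per_mile
    years_at_30_mi_day = breakeven_miles / (30 * 365)

    print("Break-even after ~%.0f miles (~%.1f years at 30 mi/day)"
          % (breakeven_miles, years_at_30_mi_day))
    # -> roughly 90,000 miles, or 8-9 years of average American driving;
    #    drive the car more (car sharing) and the payback shrinks proportionally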

To look at how electric vehicles could impact a traditional car sharing company, I looked up one of Zipcar's last shareholder annual reports before they were bought by Avis [2]. In addition to having to look up the difference between "revenue" and "income," I also had to look up some assumptions for cost of vehicle ownership [3], fleet mpg [4], gas prices [5], and some of Zipcar's historical data from Wikipedia [6]. Here's the breakdown from 2012:

  • 770,000 members, 10,000 vehicles
  • $280 million in revenue, $15 million in profit ($10 million from some "tax thing")
  • From revenues:
    • $40 million from membership fees
    • $240 million from usage payments (I simply assumed the remaining revenue)
  • From estimates on costs:
    • $100 million for gas
    • $24 million for vehicle acquisition in 2012
    • $24 million for parking space costs
    • $12 million for insurance
    • $5 million for maintenance
It was here I realized I was $115 million short on costs. I also failed to remember Zipcar is a company that needs to do things like "pay employees" and "run marketing efforts," so I included those [7].

  • Additional estimates for costs:
    • $70 million for payroll
    • $20 million for customer acquisition
  • Sum total of costs: $255 million
Not bad for an estimate of an entire company's cost and revenue structure; we're less than 10% off. If we were to wave the vehicle-electrification wand, it would primarily impact gas costs (the largest fraction of costs) and vehicle acquisition. To that end, gas costs would become electricity costs at about $23 million, based on similar vehicle-class efficiency figures (I used the Nissan Leaf) [8] and 2012 electricity prices [9]. Vehicle acquisition costs would increase to about $36 million. In this scenario, revenues are still $280 million, but costs drop by roughly $65 million, raising profits from basically break-even to around $65 million; a healthy 23% profit margin.
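Here's a minimal sketch of where those electrification numbers come from. The fleet mpg, 2012 fuel and electricity prices, Leaf efficiency, and per-vehicle purchase prices are my own stand-in assumptions in the spirit of [4], [5], [8], and [9], so the electricity figure lands near, not exactly on, the one in the text.

    # Swap Zipcar's estimated 2012 gas spend for electricity, and gas cars for EVs.
    # All inputs are stand-in assumptions chosen to be roughly 2012-representative.

    GAS_SPEND = 100e6            # $, estimated gas cost from the breakdown above
    GAS_PRICE = 3.64             # $/gal, assumed ~2012 U.S. average
    FLEET_MPG = 27               # assumed fleet-average fuel economy
    LEAF_KWH_PER_MILE = 0.34     # EPA rating for the Nissan Leaf (34 kWh/100 mi)
    ELEC_PRICE = 0.10            # $/kWh, assumed ~2012 average retail

    fleet_miles = GAS_SPEND / GAS_PRICE * FLEET_MPG
    electricity_cost = fleet_miles * LEAF_KWH_PER_MILE * ELEC_PRICE
    print("Fleet miles: %.0fM, electricity cost: $%.0fM"
          % (fleet_miles / 1e6, electricity_cost / 1e6))
    # -> ~740M miles and ~$25M of electricity, in the neighborhood of the
    #    ~$23M above; the exact figure depends on the mpg and price assumptions

    GAS_CAR_PRICE, EV_PRICE = 20000, 30000    # assumed purchase prices
    acquisition = 24e6                        # 2012 vehicle acquisition spend
    cars_bought = acquisition / GAS_CAR_PRICE # ~1,200 cars
    print("EV acquisition cost: $%.0fM" % (cars_bought * EV_PRICE / 1e6))
    # -> $36M, matching the figure above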

That was a straight swap of EVs for gas cars in an existing car sharing company. What about invoking the self-driving component? Well, that one is a little tricky. The closest analogy is Uber, and it took some research to figure out how they actually operate, as well as a leaked 5-week snapshot to get an idea of their finances [10]. Uber works as a marketplace for taxi and black-car services; they don't actually own any vehicles themselves. As such, I don't have a feel for operating expenses like I did with Zipcar. All I know is that from the fare the customer pays, Uber (primarily a software and marketing company) takes 20%, leaving the remaining 80%, along with vehicle ownership, maintenance, insurance, payroll, etc., to the taxi or black-car company. Rather disappointingly opaque, but most Silicon Valley startups are like this.

Let's consider hypothetically starting our own company using shared self-driving EVs. In honor of rolling out fleets of Google self-driving cars, let's call it "Gaggle" (a terrible name; no one ever use this name). Gaggle's usage statistics are similar to Zipcar's: 77 members/vehicle, 6 hrs of use/day on each vehicle, and 180 miles driven/day by each vehicle. Using the NHTS data, that works out to 6 trips per vehicle per day. For simplicity, Gaggle charges on a per-trip basis; think of it like a flat fee, or like a bus pass. Let's consider 2 scenarios corresponding to 2 prices, $12/trip and $6/trip, and do a simple payback-period analysis where trip revenue pays for the initial purchase of the car. Consider a $20,000 base car, $10,000 for self-driving capabilities [11], and $10,000 for a battery pack. At $12/trip, the gas and electric self-driving vehicles don't look that much different. Gaggle's electric self-driving vehicle pays for itself in 1.6 years and racks up 109,000 miles on the odometer, while the gas self-driving vehicle pays for itself in 1.8 years and hits 117,000 miles. The $12/trip price is roughly half of Uber's and about equal to Zipcar's economics. What about the $6/trip case? The electric vehicle takes longer to pay for itself, 3.6 years, and climbs up to 240,000 miles: high, but given EVs' lower maintenance requirements, definitely achievable (about 2,500 battery charge/discharge cycles). The gas vehicle, on the other hand, requires 8.2 years and 540,000 miles to pay itself back; that means at least one vehicle replacement along the way, further hurting the economics.
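Here's a minimal sketch of that payback arithmetic. The per-mile operating costs (~$0.03/mile electric, ~$0.145/mile gas, meant to fold in fuel plus a bit of upkeep) are my own assumptions; with them the outputs land close to the figures quoted above, but they're illustrative, not Gaggle's (nonexistent) books.

    # Simple payback-period model for a shared self-driving vehicle.
    # Per-mile operating costs are assumptions; everything else is from the text.

    TRIPS_PER_DAY = 6
    MILES_PER_DAY = 180

    def payback(capital, price_per_trip, op_cost_per_mile):
        """Years and odometer miles until trip revenue covers the purchase price."""
        net_per_day = TRIPS_PER_DAY * price_per_trip - MILES_PER_DAY * op_cost_per_mile
        days = capital / net_per_day
        return days / 365.0, days * MILES_PER_DAY

    EV_CAPITAL = 20000 + 10000 + 10000   # base car + self-driving + battery
    GAS_CAPITAL = 20000 + 10000          # base car + self-driving
    EV_OP, GAS_OP = 0.03, 0.145          # $/mile, assumed

    for price in (12, 6):
        for label, capital, op in (("EV", EV_CAPITAL, EV_OP),
                                   ("Gas", GAS_CAPITAL, GAS_OP)):
            years, miles = payback(capital, price, op)
            print("$%d/trip %-3s: %.1f yr, %.0f miles" % (price, label, years, miles))
    # -> roughly 1.6-1.8 yr and ~110-120k miles at $12/trip, and ~3.6 yr / ~240k
    #    miles (EV) vs ~8 yr / ~540k miles (gas) at $6/trip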

Due to the lower operating cost of self-driving electric vehicles, Gaggle can offer cheap, convenient private transportation that makes a strong business case for profitability. Gaggle can reduce traffic, reduce pollution, and reduce frustration, all for about the cost of grabbing lunch out. Car sharing has never been better.

World: meet Gaggle.

----------------------------------------------------------------------------------------------------------------------------------
[1] http://nhts.ornl.gov/2009/pub/stt.pdf
[2] http://www.zipcar.com/press/releases/zipcar-reports-fourth-quarter-and-full-2012-results
[3] http://org.elon.edu/sustainability/documents/Zipcar%20FAQs.pdf
[4] http://www.nhtsa.gov/staticfiles/rulemaking/pdf/cafe/April_2013_Summary_Report.pdf
[5] http://www.eia.gov/dnav/pet/pet_pri_gnd_dcus_nus_a.htm
[6] http://en.wikipedia.org/wiki/Zipcar
[7] http://www.marketingsherpa.com/data/members/handbooks/2012-Lead-Generation-Benchmark-Report-EXCERPT-5-23-12.pdf
[8] http://www.fueleconomy.gov/feg/Find.do?action=sbs&id=30979
[9] http://www.eia.gov/electricity/monthly/pdf/chap5.pdf
[10] http://valleywag.gawker.com/matt-durham-an-analyst-at-an-ecommerce-company-crunche-1476549437
[11] http://www.fastcompany.com/3025722/will-you-ever-be-able-to-afford-a-self-driving-car

Wednesday, May 28, 2014

Shale Gas: Fugitive Methane Emissions (3 of 5)

An article recently cropped up about natural gas methane emissions, and it prompted me to pick up where I left off on shale gas. For me, this was more a question about the widespread use of natural gas than about shale gas in particular, but the research revealed that shale has some unique attributes that merit particular attention.

The crux of the matter with methane emissions from natural gas wells and the associated infrastructure is: are methane emissions currently high enough to offset the gains in efficiency from burning natural gas? Natural gas burns more efficiently in boilers [1] and power plants [2]; however, vented methane from fracking operations and infrastructure leaks has a very high global warming potential (86 and 34 times that of CO2 over the 20- and 100-year timeframes, respectively) [3]. So, which one wins? Efficiency (less fuel burned means fewer emissions)? Or leaks (fugitive emissions impact the climate more)? To answer that question, we can look at how much more efficient natural gas is than other fuels and their respective greenhouse gas impacts, set a maximum allowable leak rate, and then see whether the actual leak rate comes in over or under it.

From the EIA, electric power (33%) and industrial uses (31%) are the largest consumers of natural gas [4], but curiously most of the natural gas used in industry isn't for plant and process heating; it's used as feedstock (65%) and for other non-heat-and-power purposes [5]. From that perspective, we'd be fine just looking at the power plant sector. Natural gas is 50% more efficient than coal in power plants [6] and emits half as much CO2 per unit of energy burned [7], so solving for a maximum leak rate, it ends up being no more than 3.3% for the 20-year outlook and 12.5% for the 100-year outlook. Based on an Environmental Defense Fund/Princeton analysis of similar benefit scenarios, our estimate is in the right ballpark [8].
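Written out symbolically, the break-even condition being used here (my framing of it, not the exact accounting in [8]) is that, per unit of electricity delivered, the CO2 from burning gas plus the CO2-equivalent of the leaked methane must come in under the CO2 from coal. With e_gas and e_coal the combustion CO2 per MWh, m_CH4 the methane burned per MWh, and L the leak rate (small-L approximation):

    e_{gas} + L \, m_{CH_4} \, \mathrm{GWP} \;\le\; e_{coal}
    \quad\Longrightarrow\quad
    L_{max} = \frac{e_{coal} - e_{gas}}{m_{CH_4} \, \mathrm{GWP}}

The exact thresholds (3.3% and 12.5% here) depend on the plant efficiencies, carbon intensities, and leak-rate accounting conventions that get plugged in, which is part of why published numbers vary.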

So how are we doing now relative to the actual methane leak rate? Well...it depends on who you ask. A number of papers have been published to look at exactly this problem. A group of Cornell professors (curiously, from the evolutionary biology department) pegged fugitive emissions at 3.6-7.9% using a 2010 EPA report [9]. They were panned by another group of Cornell professors, this time from the chemical/biological engineering and earth/atmospheric sciences departments [10], who used EPA 2011 data and put the figure at 2.2%. One multi-university team recently measured methane emissions directly [11] and calculated 0.42%. I did my own analysis based on EPA's 2014 GHG report and found 1.0% [12] [13].

Basically, this is still early science and a precise number has yet to be nailed down, but by the sounds of it, it doesn't appear to be above the 3.3% threshold for the 20-year timeframe. So we don't seem to be doing any worse for the climate, from a greenhouse gas perspective, by exploiting natural gas, but we aren't really doing any better either, and if you believe anything climate models tell us (and really, you should, because they're correct), we need to be doing better fast. Right now natural gas sits at about 85% of coal's greenhouse gas emissions if you take the EPA's 2011 leak value; it could be 55% if we tightened up our natural gas infrastructure. And fixing leaks isn't something we'd do just for the sake of the climate (though that's reason enough); leaked gas is lost revenue. Roughly $2.2 billion/year is leaking out of poorly designed and maintained infrastructure and processes, and that's only set to increase as natural gas development expands. If that isn't a business opportunity, I don't know what is.

Natural gas isn't doing us any real favors right now, but it does hold a lot of promise for significantly lowering the carbon footprint of our electrical grid in a very short timeframe. We need to survey the industry for best practices and standardize them, or help bring to market solutions that would capture that lost value. We also need to seriously consider how best to fit natural gas into our energy portfolio to reduce climate risk exposure. It's not a silver bullet (nothing ever is), but at least it's another bullet in the chamber. I'll discuss my thoughts on economic and environmental strategies in a final post about shale gas soon.

----------------------------------------------------------------------------------------------------------------------------------
[1] www.eia.gov/neic/experts/heatcalc.xls
[2] http://www.eia.gov/tools/faqs/faq.cfm?id=107&t=3
[3] http://www.climatechange2013.org/images/report/WG1AR5_Chapter08_FINAL.pdf
[4] http://www.eia.gov/dnav/ng/ng_cons_sum_dcu_nus_a.htm
[5] http://www.eia.gov/totalenergy/data/monthly/pdf/sec4_5.pdf
[6] http://www.eia.gov/tools/faqs/faq.cfm?id=667&t=8
[7] http://www.eia.gov/electricity/annual/html/epa_a_03.html
[8] http://www.pnas.org/content/109/17/6435.full#F2
[9] http://download.springer.com/static/pdf/5/art%253A10.1007%252Fs10584-011-0061-5.pdf?auth66=1401495762_5479423aee71642f65bd374b10555269&ext=.pdf
[10] http://link.springer.com/article/10.1007/s10584-011-0333-0/fulltext.html
[11] http://www.pnas.org/content/early/2013/09/10/1304880110.full.pdf
[12] http://www.epa.gov/climatechange/Downloads/ghgemissions/US-GHG-Inventory-2014-Chapter-Executive-Summary.pdf
[13] http://www.eia.gov/dnav/ng/ng_cons_sum_dcu_nus_a.htm

Wednesday, May 21, 2014

If It Sounds Too Good To Be True...

I work as an electrochemical engineer for a battery startup. My day-to-day is spent elbow-deep in the science of batteries. It's difficult work (really difficult; I'll write about it one day), and I'm always learning something new. From time to time, a friend or I will come across an article about a new battery chemistry. I've learned through many examples that if it sounds boring (silicon anodes, or quinone electrolytes), it's usually big news, but if it sounds exciting (a battery that runs on air! an edible battery!), it's usually not so great.

The Power Japan Plus battery is one of those not-so-great batteries, and here's why:

Lithium ion batteries work by shuttling ions between two "intercalation materials." In the discharged state, the lithium sits in the crystal structure of a lithium metal oxide; upon charging, lithium ions leave the oxide, travel through the lithium-salt electrolyte, and insert into the open spaces of a graphite anode, thereby storing energy. The "Ryden battery" is a similar lithium intercalation battery, except that the source of the lithium isn't a lithium oxide; it's the lithium-salt electrolyte itself. There is no solid-state source of lithium ions [1]. As such, the capacity of the battery is limited by how much lithium you have in the electrolyte, which is a function of volume and concentration. You don't want a large volume of electrolyte, because that means a greater distance for lithium ions to travel, increasing ionic resistance and driving down cell efficiency. So there's one limit on cell capacity.
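To see how tight that limit is, here's a minimal back-of-envelope sketch. The 1 M concentration is the practical ceiling discussed in [2]; the ~4 mL electrolyte fill and ~2,500 mAh capacity for an 18650-size cell are rough typical values I'm assuming purely for comparison.

    # How much charge can lithium dissolved in the electrolyte actually supply?

    F = 96485.0                    # C/mol, Faraday's constant
    C_TO_mAh = 1000.0 / 3600.0     # 1 coulomb = ~0.278 mAh

    conc = 1.0                     # mol/L, practical Li-salt concentration limit
    capacity_per_liter_mAh = conc * F * C_TO_mAh       # ~26,800 mAh per liter

    electrolyte_mL = 4.0           # assumed fill for an 18650-size cell
    capacity_from_electrolyte = capacity_per_liter_mAh * electrolyte_mL / 1000.0

    print("Li in electrolyte: %.0f mAh vs ~2500 mAh for a standard 18650"
          % capacity_from_electrolyte)
    # -> only ~100 mAh of lithium lives in the electrolyte itself, roughly 25x
    #    too little, which is why conventional cells keep their lithium in a
    #    solid oxide electrode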

Even if you manage to have an exceptionally high concentration of lithium (you can't, by the way; it's stuck around 1 M for safety considerations [2]), you're still limited by the specific capacity of the graphite electrodes, which here comes in around 100-150 mAh/g. Power Japan Plus gives us a diagram showing cell voltage as a function of capacity, and the battery stops charging at 140 mAh/g; just what we'd expect. There's another limit on cell capacity (granted, a limit we come across all the time in lithium ion batteries).

Finally, all this talk of lithium intercalating into graphite wouldn't be possible unless we had a counter-reaction to balance the charge. The "Ryden battery" claims to use a negative ion intercalating into graphite as the cathode. There are a couple of ions that could do this, but the ones that come to mind are halides like fluoride or bromide (I actually couldn't find much literature on the electrochemistry of negative-ion intercalation compounds: a bit of a red flag). We have other evidence to think this might be the case: Power Japan Plus quotes a cell voltage of around 4.5 V. Solid-state electrochemistry is very different from aqueous chemistry, but it follows similar trends; to get a cell voltage that large with lithium on one side, you need a very electronegative couple on the other side. Fluorine or bromine would do it. In an aqueous system (which is impossible for materials-compatibility reasons, but for this exercise let's go ahead and consider it), a lithium/fluorine couple would sit around 5.8-5.9 V, which is in the same neighborhood as the "Ryden battery." And fluorine is often featured in lithium ion battery electrolytes, adding support to our guess that the other couple is a halide. That cuts against the idea that their all-cotton battery is earth friendly if it contains one of the most toxic elements on the periodic table. I'm not faulting Power Japan Plus for their chemistry; I'm faulting them for their messaging. If they think cotton in a 3000°C furnace (which is just carbon at that point) and fluorine are "earth friendly," then who am I to argue.
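For the voltage ballpark, here's a minimal sketch using textbook aqueous standard reduction potentials. It's only a ballpark; real intercalation potentials in a solid, non-aqueous cell will shift these values, but the trend holds.

    # Ballpark cell voltages from standard aqueous reduction potentials (V vs SHE).

    E_STANDARD = {
        "Li+/Li":   -3.04,
        "F2/F-":     2.87,
        "Br2/Br-":   1.07,
    }

    for halide in ("F2/F-", "Br2/Br-"):
        cell_v = E_STANDARD[halide] - E_STANDARD["Li+/Li"]
        print("Li vs %s: %.1f V" % (halide, cell_v))
    # -> ~5.9 V for a lithium/fluorine couple and ~4.1 V for lithium/bromine,
    #    bracketing the ~4.5 V the "Ryden battery" advertises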

The long and short of this analysis: we have a battery with the same fundamental limitations as lithium ion batteries (the graphite electrode), a cell architecture with serious capacity limits, and questionable negative-ion (cathode) chemistry.

I don't mean to be a killjoy, but energy storage is a serious problem that demands serious answers. This is not one of those serious answers; let's stop treating it that way.

--------------------------------------------------------------------------------------------------------------------------------
[1] http://powerjapanplus.com/battery/equation.html
[2] http://www.electrochem.org/dl/interface/sum/sum12/sum12_p045_049.pdf

Sunday, May 18, 2014

Shale Gas: Water, Water Everywhere, but Not a Drop to Drink (2 of 5)

I'll start by saying this post is not meant to serve as a complete justification or vindication of shale gas and hydraulic fracturing; it is meant to provide context. Context can make the case seem better or worse than we expect, but either way we walk away a little more informed.

That said, I outlined my list of grievances (which I believe most people share) in my earlier post. I'm going to pick out the major concerns that I think merit closer inspection. Let's start with the first grievance, the one concerning water.

1) Exorbitant use of fresh water.

As I mentioned before, fracking operations use between 4 and 6 million gallons of fresh water in their initial hydraulic fracturing run to break open the shale and release natural gas for roughly a year. I calculated this to be the equivalent of the annual water consumption of 4 US homes per well. There are 400,000 gas wells in the US [1], and 40% of natural gas now comes from shale sources [2]. I'm assuming all wells are equal in capacity (not true, but for our purposes not a bad approximation). This means there are roughly 750,000 homes-worth of water being consumed each year in shale operations. Vikram Rao, author of Shale Gas: The Promise and the Peril, believes that underground salt water can be used for the initial frack and that this should be mandated through regulation. Until best practices establish how salt water can be used, gas drilling companies will keep using millions of gallons of fresh water for fracking. I thought this was a lot of fresh water. Turns out it's only part of the story.

Thermal power plants require cool fresh water as a heat-rejection mechanism. Power plants prefer fresh water over briny water for the same reasons fracking wells do: lower risk of scale formation, less susceptibility to corrosion, easier materials compatibility. The thing is, power plants use A LOT of water to keep cool. 49% of all water withdrawals in the US are for power plant cooling. That's 200 billion gallons a day, on the order of half the Mississippi River's average flow. The next closest is irrigation to grow our crops, at 31% [3]. I find this number staggering. Admittedly, this is for withdrawals, which is different from consumption, and depending on the type of cooling system used, a lot of that water may be returned to the environment.

Regardless, it's still a large number that dwarfs the water consumed in fracking. A report from the Harvard Kennedy School estimates that the water used in fracking makes up less than 10% of the total water consumed when shale gas is burned in a high-efficiency combined-cycle gas turbine plant with low-water recirculating cooling (the typical new construction these days). On a per-MWh basis, that's about a factor of 2 less water than coal (which uses about twice as much water per unit of energy content washing coal as fracking does) and a factor of 4 better than nuclear [4]. The Kennedy report is quick to add that hydraulic fracturing can still stress water resources locally because of the short duration and high rate at which the water is consumed, even though the gross consumption is relatively small. I'll add that the water consumption for wind and solar PV is essentially zero.

There are a few things I think are important to point out here. The reason natural gas comes out so far ahead of coal and nuclear is mainly power plant thermal efficiency. If your power plant is less efficient, more thermal energy needs to be rejected for a given amount of work, so more water is needed to keep things cool. Combined-cycle gas turbine plants are basically two power plants in one (a gas-turbine topping cycle and a steam-turbine bottoming cycle), and as such run somewhere in the vicinity of 45% efficient. You can't run a gas turbine on coal (some people are trying, though), so you can't leverage a topping cycle; you're stuck with just one steam-turbine cycle and an efficiency of about 33%. Nuclear is de-rated on efficiency for safety and process considerations, so it comes in around 29% [5]. The other element I was surprised by was the water intensity of other energy-extraction processes. As I mentioned earlier, coal mining is about 2x more water intensive than natural gas fracking, and uranium mining about 10x more intensive. It seems to me that not only are we using too much fresh water with even our least water-demanding energy process, but we completely ignore significantly more demanding processes. I find that concerning. The silver lining in all this water research is that two renewable generation methods, wind and solar PV, use essentially no water at all. That point, I believe, should be emphasized.
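A quick sketch of why efficiency drives cooling demand: for every unit of electricity produced, a plant has to reject (1 - η)/η units of heat. (This ignores that some of a gas plant's rejected heat leaves through the flue rather than the cooling water, which makes gas look even better; the efficiencies are the ones quoted above.)

    # Heat rejected per unit of electricity as a function of plant efficiency.
    # Cooling-water demand scales with this rejected heat.

    for plant, eta in (("Combined-cycle gas", 0.45),
                       ("Coal steam cycle",   0.33),
                       ("Nuclear",            0.29)):
        rejected = (1 - eta) / eta   # MWh of heat dumped per MWh of electricity
        print("%-18s eta=%.2f -> %.1f MWh heat rejected per MWh electricity"
              % (plant, eta, rejected))
    # -> roughly 1.2, 2.0, and 2.4 respectively: the less efficient the plant,
    #    the more heat (and cooling water) per unit of electricity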

I think water use in energy generation is a really big problem. In this regard, shale gas and hydraulic fracturing feeding combined-cycle gas turbine plants actually come out ahead of conventional coal or nuclear power plants. But while water is a large problem, it's not the only one. We'll look at other environmental impacts in subsequent posts.

----------------------------------------------------------------------------------------------------------------------------------
[1] http://www.eia.gov/dnav/ng/ng_prod_wells_s1_a.htm
[2] http://www.eia.gov/forecasts/aeo/MT_naturalgas.cfm#natgas_prices?src=Natural-b1
[3] http://pubs.usgs.gov/fs/2009/3098/pdf/2009-3098.pdf
[4] http://belfercenter.ksg.harvard.edu/files/ETIP-DP-2010-15-final-4.pdf
[5] http://www.eia.gov/electricity/annual/html/epa_08_02.html