Women in AI: Tamar Eilam is helping IBM build sustainable computing

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution.

Tamar Eilam has worked at IBM for the past 24 years. She’s currently an IBM Fellow, serving as chief scientist for sustainable computing, where she helps teams reduce the energy their computing consumes. The work she’s proudest of is Kepler, an open source project that quantifies the energy consumption of individual containerized applications.
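Kepler (the Kubernetes-based Efficient Power Level Exporter) publishes those per-container energy numbers as Prometheus metrics. As a minimal sketch, assuming a Prometheus server at localhost:9090 is already scraping Kepler and that the counter is named kepler_container_joules_total (worth verifying against your Kepler version, since metric names can change), per-container energy use over the last hour could be pulled like this:

```python
import requests

# Assumed Prometheus endpoint scraping a Kepler exporter; adjust for your cluster.
PROM_URL = "http://localhost:9090/api/v1/query"

# increase(...) turns Kepler's cumulative joules counter into
# "joules consumed in the last hour", summed per container.
QUERY = 'sum by (container_name) (increase(kepler_container_joules_total[1h]))'

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    container = result["metric"].get("container_name", "<unknown>")
    joules = float(result["value"][1])
    # 3.6 million joules per kilowatt-hour
    print(f"{container}: {joules / 3.6e6:.6f} kWh over the last hour")
```

The point of the project is exactly this kind of visibility: energy becomes a per-application number you can chart and optimize rather than a data-center-wide abstraction.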

In many ways, she’s been ahead of the curve: Energy consumption has become one of the industry’s most important topics as the AI revolution progresses. AI uses a vast quantity of natural resources; both training and using it are energy-intensive. A Goldman Sachs report from this year estimated that a ChatGPT query requires nearly 10 times as much electricity to process as a Google search. The same report projected that AI will drive a 160% increase in data center power demand in the near term.

This is the problem Eilam is working to help mitigate at IBM.

“There needs to be a focus on sustainability in general,” she told TechCrunch. “We have an issue and we have also an opportunity.” 

The energy issue 

Eilam believes the industry is caught in a conundrum. AI has the potential to make industries more sustainable, even though right now the technology itself is a resource drain, she said. 

In fact, computing and AI can help decarbonize the electrical grid, she said. Right now, the grid partly depends on renewable sources like hydro, solar, and wind, which fluctuate in price and availability. That means data centers powered by those sources struggle to guarantee service that is consistent in both price and power source. “By having the power grid work in tandem with computing, by having the ability to shift workloads or reduce workloads, we can actually help decarbonize,” she said.

But natural resources aren’t her only worry. “Think about how many chips we’re manufacturing and the carbon costs and toxic materials that go into manufacturing these chips,” she said of the industry. 

She keeps all of these problems in mind at IBM and says she tries to approach sustainable AI holistically when looking for solutions. For example, she says IBM leads a program sponsored by the National Science Foundation to identify forever chemicals (PFAS) in AI chips so the company can accelerate the discovery of new materials to replace them.

When it comes to operations, she advises teams on how to train AI models in ways that save energy. “Using less data, but also high-quality data, you’re going to converge quicker to a more accurate solution,” she said.

For inference, she says IBM has a speculative decoding technique that improves efficiency. “Then you go down the stack,” she continued. “We have our own platform, so we’re building a lot of optimization that has to do with how you deploy these models on accelerators.”
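Eilam doesn’t detail IBM’s implementation, but speculative decoding generally pairs a small, cheap “draft” model with the large target model: the draft proposes a few tokens at a time, the target verifies them, and the pipeline falls back to the target’s own prediction at the first rejection. Here is a toy sketch of that control flow only; the “models” below are random stand-ins, not real language models, and real systems accept or reject by comparing the two models’ probabilities:

```python
import random

random.seed(0)

VOCAB = ["the", "grid", "runs", "on", "wind", "power", "."]

def draft_next(context):
    # Cheap draft model: guesses quickly, possibly inaccurately.
    return random.choice(VOCAB)

def target_accepts(context, token):
    # Stand-in for the expensive target model verifying a drafted token;
    # real implementations compare draft vs. target probabilities here.
    return random.random() < 0.7

def target_next(context):
    # The target model's own pick, used when a draft is rejected.
    return random.choice(VOCAB)

def speculative_decode(prompt, k=4, max_tokens=12):
    tokens = list(prompt)
    while len(tokens) < max_tokens:
        # 1) Draft k tokens cheaply, conditioning on everything so far.
        drafted = []
        for _ in range(k):
            drafted.append(draft_next(tokens + drafted))
        # 2) Verify the drafts left to right, keeping the accepted prefix.
        for tok in drafted:
            if target_accepts(tokens, tok):
                tokens.append(tok)
                if len(tokens) >= max_tokens:
                    break
            else:
                # 3) On the first rejection, take the target's token instead
                #    and throw away the rest of the draft.
                tokens.append(target_next(tokens))
                break
    return tokens

print(" ".join(speculative_decode(["the"])))
```

The win is that one expensive verification pass can confirm several cheap draft tokens at once, so the big model runs fewer times per generated token.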

She says IBM believes in openness and heterogeneity, the latter meaning that one model size doesn’t fit every use case. “This is why we released Granite in multiple different sizes, because based on your use case, you’re going to choose the size that is right for you, that will cost you potentially less, and it will fit your needs, and you will spend less energy.”

Her teams build in observability to quantify everything, including energy consumption, latency, and throughput, she said. She sees her work as increasingly important, especially as she hopes more people will come to trust that IBM’s models offer computing that is effective as well as sustainable. “What we’re telling them is, ‘Hey, don’t start from scratch,’” she said. “Take Granite and now you fine-tune it. Do you know how much energy you save because you didn’t start from scratch?”

“The reason they want to start from scratch developing their own models is because they don’t trust what’s out there. Because you don’t know what data went into the training and maybe you’re violating some IP,” she said. “We have IP indemnity for all our models because we can tell you exactly the data that went in, and we are going to assure you that there is no IP violation. So, that’s where we’re saying ‘Hey, you can trust our models.’” 

A woman in AI 

Eilam’s background is in distributed cloud computing, but in 2019, she attended a software conference where one of the keynotes was about climate change. “I couldn’t stop thinking about sustainability since I left the talk,” she said. 

So she merged climate and computing and set out to make a change. But diving deeper into AI meant she was often the only woman in the room. She learned a lot about unconscious biases, which she says both men and women carry in different ways. “I think a lot about creating awareness,” she said, especially as a woman in a leadership role.

She co-led a workshop at IBM Research a few years ago, talking to women about these types of biases, such as how women often won’t apply for a job unless they meet more than 70% of the qualifications, while men will apply even if they meet less than 50%. She has some advice for women setting out on their own professional journeys: Never be afraid to have opinions and to express them.

“Persist, persist. If they don’t listen, state it another time, and another time. That’s the best advice I can give.” 

What the future holds 

Eilam thinks investors should look for startups that are transparent about their innovations.

“Are they disclosing their data sources?” she said, adding that the same applies to whether a company shares how much energy its AI consumes. She also says it’s important for investors to note whether a startup has guardrails in place that can help prevent high-risk scenarios.

She’s also in favor of more regulation, even though it may be tricky to craft given how complicated the technology can be, she said. The first step, though, goes back to transparency: being able to explain what is going on and being honest about the impact it will have.

“If explainability is not there, and then we’re using [AI] without consequences to people’s potential future, there is an issue here,” she said. 

This piece has been updated.

