The Sam Altman saga casts a shadow on “ethical” AI

The saga of Sam Altman’s firing and re-hiring from OpenAI happened at a distance from the video game industry. Some game developers have been experimenting with the GPT-4 API to create chatbot-like NPCs, but major platform owners like Valve have signaled they won’t allow games built on the model to be sold without proof they were built on data owned by the developer.
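
For a sense of what those experiments look like, here is a minimal sketch of a GPT-4-backed NPC dialogue loop using the official openai Python client. The NPC persona, prompts, and helper function are hypothetical stand-ins, and the snippet assumes an OPENAI_API_KEY is set in the environment.

```python
# Minimal sketch of a chatbot-style NPC backed by the GPT-4 API.
# The persona and dialogue here are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt pins the model to a character and a terse, in-world register.
history = [{
    "role": "system",
    "content": (
        "You are Mira, a blacksmith NPC in a fantasy village. "
        "Answer in one or two sentences and never break character."
    ),
}]

def npc_reply(player_line: str) -> str:
    """Append the player's line, query the model, and return the NPC's answer."""
    history.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(npc_reply("Can you reforge my broken sword?"))
```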

That wrinkle in the video game industry’s AI adoption speaks to one adjective bandied about by developers when discussing generative AI tools: “ethical.” We started hearing the word as early as 2017, and Unity outlined its plans for ethical AI in 2018. Throughout 2023 we’ve heard AI developers big and small roll out the phrase, seemingly with the awareness that there is general unease about how AI tools are made and how they are used.

Last week’s events, tumultuous as they were, should make things clear for developers: when push comes to shove, it’s profits, not ethics, that are driving the AI boom, and those loudly championing the ethics of their own AI tools deserve the most scrutiny.

The concerns over AI ethics are valid

2023 has given us a bounty of case studies to unpack why developers are worried about the “ethics” of generative AI. Unity’s Marc Whitten explained to us in a recent chat that the company’s AI tools have been ethically designed so developers can ensure they own their data, and that the data used to make their game content has been properly licensed.

That explanation addressed concerns about data ownership and generative AI tools, which have repeatedly been shown to harvest words and images that their developers did not have the rights to use.

The flip side of the ethical AI coin is the deployment of the tools. Voice actors have become the first victims of AI deployment, as companies have either pressured them to sign away their voices for future replication or watched as too-eager fans ran their voices through commercially available tools to mod them into other games.

Evie Frye from Assassin's Creed Syndicate. Image via Ubisoft.

This threatens not only to take away their jobs but to force words into their mouths that they never said, a pretty violating sensation if your job is to use your voice to perform.

With such high-profile examples, developers are right to be worried about the ethical deployment of “AI.” But the strange saga of Sam Altman’s ouster and re-coronation at OpenAI, an organization supposedly structured to prioritize ethics over profits, shows that ethics are already being deprioritized by the day.

A non-profit that will ultimately drive profits

The heart of Altman’s ouster was actually rather surprising. When the company announced his termination on Friday it was reasonable to assume that dramatic accusations against the CEO were about to drop. But we have to give him some credit—none did.

At the time, OpenAI’s board said that Altman had been terminated for not being “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

Instead, what emerged was that the firing stemmed from an internal conflict over the “speed” of creating generative AI tools and whether the company should be chasing revenue so quickly. This debate is baked into the company’s structure, in which the for-profit OpenAI corporation is owned by the non-profit OpenAI, supposedly to ensure safety and ethics are prioritized over a reckless bid for profits.

There are echoes of this structure across the AI industry. In 2022 Ars Technica reported on the discovery of private medical photos contained in the open-source LAION dataset, which fuels the technical prowess of tools like Stable Diffusion. LAION is also the product of a non-profit organization, but Stable Diffusion’s owner, Stability AI, is a for-profit company.

That pipeline of data flow doesn’t look good under a certain light. AI researchers spin up non-profits to build machine learning-friendly datasets. Said data then fuels for-profit corporations, which attract investors who fund bigger tools in the hope of seeing bigger returns, and here we are again in another Big Tech bubble.

Are these non-profits truly non-profits? Or are they a means of laundering data and ethics to bolster their for-profit cousins?

It was ultimately investors—including Microsoft and its CEO Satya Nadella—who flexed their grip on OpenAI after Altman was ousted.

Whatever violations Altman had supposedly committed to warrant such a sharp and sudden punishment clearly weren’t a problem for them. What they were worried about was the removal of a charismatic CEO who was leading the company in spinning up new AI products.

To be fair to Altman’s defenders, no accusations about his behavior have surfaced in the days since, and his return to OpenAI was heralded by a massive show of support from employees (though a Washington Post report sheds additional light on why the board was losing trust in Altman). Under those circumstances I wouldn’t want to see someone I trusted and invested in deposed either. With hindsight being 20/20, it seems clear that his firing wasn’t fair and wasn’t good for the company.

We’re left with an uncomfortable conclusion: if OpenAI’s board of directors had real ethical concerns about where Altman was taking its corporate subsidiary, those concerns should have been shared with either investors or employees. If the board’s role is to guard against the unethical use of AI (or the science-fiction-founded premise of the creation of “artificial general intelligence,” or AGI), then this was supposedly its big moment to do so.

That its concerns could be toppled in less than a week, however, shows a rather sad truth: OpenAI’s ethics-minded mission may not have been about ethics after all.

A lesson in ethics for AI in game development

With the Sam Altman employment saga (hopefully) behind us, we can take these lessons back to the world of game development. Generative AI tools will see widespread adoption in the next couple of years, and it’s more than likely developers will be able to appease gatekeepers like Valve and ship content made with such tools after demonstrating they own all the data that went into them.

A lot of those tools will be mundane. After all, the game industry already uses procedural generation and machine learning to speed up tasks that used to take hours. There are unequivocal wins in the world of AI tooling, and plenty of humdrum uses of the technology aren’t weighed down by the ethical concerns raised by the industry.

But now that we know how OpenAI’s nonprofit arm responded to what it saw as a serious ethics challenge, we have a benchmark for evaluating discussion of “ethics” in the video game world. Those who deploy language like OpenAI’s should receive the greatest scrutiny, and be taken to task if it really seems like they’re using the word as a shield for pure profiteering. Those who can actually speak to the underlying ethical concerns of generative AI without relying on buzzwords should be praised for doing so.

I can’t believe I’m writing this, but it’s Ubisoft that I consider a standout example of the latter category. The public rollout of its Ghostwriter tool was soaked in cringey tech industry energy, but in a presentation to developers at the 2023 Game Developers Conference, Ubisoft La Forge researcher Ben Swanson spoke eloquently about who the tool is meant to benefit, where it sources its data, and what developers can do to ensure a proper and legal chain of data ownership.

Swanson even tipped his hand about why developers should act with self-interest when selecting certain AI tools: plugging in the API of open-source AI developers puts their own data at risk. Using proprietary methods and picking more selective means of data modeling isn’t just good for ethical data ownership, it’s good for company security too.
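
As a rough illustration of that security argument, the sketch below runs an open-weights model entirely on local hardware using the Hugging Face transformers library, so prompts and studio data never leave the machine. GPT-2 here is purely a small stand-in for whatever model a studio might actually license and host.

```python
# Minimal sketch: local inference keeps prompts and data on studio hardware.
# Assumes the Hugging Face `transformers` library; GPT-2 is a stand-in model.
from transformers import pipeline

# Weights are downloaded once, then all generation runs locally;
# no prompt text is sent to a third-party API.
generator = pipeline("text-generation", model="gpt2")

prompt = "Guard NPC greeting a traveler at the city gate:"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```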

His point was given rather public potency just weeks later, when Samsung engineers leaked internal documents and source code by playing around with ChatGPT.

If game developers want a proper course in the ethical questions of generative AI, they’re better off turning to philosophers like video essayist Abigail Thorn. In the end it will be ethicists—those toiling away in academia over how humans determine right and wrong—who will shine a light on the “ethics” of AI.

Everything else is just marketing.

GDC and Game Developer are sibling organizations under Informa Tech
