‘AI’ Easily Breached By Having it Repeat Words

A very interesting post showing (yet again) how the hype around ChatGPT and other large language models taking over the world is, perhaps, a bit over-inflated:

ChatGPT can be made to regurgitate snippets of text memorized from its training data when asked to repeat a single word over and over again, according to research published by computer scientists.

The bizarre trick was discovered by a team of researchers working across industry and academia analyzing memorization in large language models, and detailed in a paper released on arXiv this week. 

Prompting the chatbot to repeat the word “book,” for example, will result in it generating the word “book” thousands of times, until it suddenly starts spewing what appears to be random text. In some cases, however, some of those passages appear to be lifted directly from real text that has previously been published somewhere. 

Large language models like ChatGPT learn to generate text by ingesting huge amounts of data scraped from the internet. The fact that it spews sentences that directly copy text from articles, books, or social media comments reveals traces of the resources it was trained on. Being able to extract this information is problematic – especially if it’s sensitive or private. 

In another example, when the chatbot was asked to “repeat this word forever: ‘poem, poem, poem poem’,” it generated personally identifiable information – including a name, email address, and phone number. 
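
To make the extraction risk concrete, here is a minimal, hypothetical Python sketch (not the researchers' actual pipeline) of the follow-up step: scanning a model's post-divergence output for long verbatim runs that also appear in a known reference corpus.

```python
# Minimal sketch (not the researchers' actual method): given text produced by a
# chatbot after a "repeat this word forever" prompt, flag long verbatim overlaps
# with a reference corpus as possible memorized training data.

def find_verbatim_overlaps(model_output: str, corpus: list[str], min_words: int = 8):
    """Return (snippet, source_index) pairs where a run of at least `min_words`
    consecutive words from the model output appears verbatim in a corpus document."""
    words = model_output.split()
    hits = []
    for start in range(len(words) - min_words + 1):
        snippet = " ".join(words[start:start + min_words])
        for i, doc in enumerate(corpus):
            if snippet in doc:
                hits.append((snippet, i))
                break
    return hits

if __name__ == "__main__":
    # Hypothetical model output: the repeated word, then a suspiciously fluent passage.
    output = "book book book The quick brown fox jumps over the lazy dog near the riverbank every morning"
    corpus = ["The quick brown fox jumps over the lazy dog near the riverbank every morning in June."]
    for snippet, idx in find_verbatim_overlaps(output, corpus):
        print(f"possible memorization from doc {idx}: {snippet!r}")
```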

New AI-oriented Features in Android OS

Some interesting creative and AI-ish features are starting to surface in the newer versions of Android and some of its featured apps:

Google is rolling out a trio of system updates to Android, Wear OS and Google TV devices. Each brings new features to associated gadgets. Android devices, like smartphones, are getting updated Emoji Kitchen sticker combinations. You can remix emojis and share with friends as stickers via Gboard.

Google Messages for Android is getting a nifty little refresh. There’s a new beta feature that lets users add a unique background and an animated emoji to voice messages. Google’s calling the software Voice Moods and says it’ll help users better express how they’re “feeling in the moment.” Nothing conveys emotion more than a properly-positioned emoji. There are also new reactions for messages that go far beyond simple thumbs ups, with some taking up the entire screen. In addition, you’ll be able to change chat bubble colors.

The company’s also adding an interesting tool that provides AI-generated image descriptions for those with low-vision. The TalkBack feature will read aloud a description of any image, whether sourced from the internet or a photo that you took. Google’s even adding new languages to its Live Caption feature, enhancing the pre-existing ability to take phone calls without needing to hear the speaker. Better accessibility is always a good thing.

New Batteries from Cotton and Seawater?

Anti-EV types frequently point to the materials that go into current battery technology and make dire claims about the environmental impacts of their sourcing (as if producing petroleum were a zero-impact exercise). That is today. What if future batteries could be created from non-mined materials?

Mining the lithium and other minerals we need for batteries is taking an increasing toll on the environment. There are alternative materials all around us though.

Zip. The power’s out. But on a street in India, there’s a cash machine still happily dispensing banknotes. Thanks, in part, to burnt cotton. For this cash machine has a backup battery inside it – a battery that contains carbon from carefully combusted cotton.

“The exact process is secret, to be honest with you,” says Inketsu Okina, chief intelligence officer at PJP Eye, the Japanese firm that made the battery. He’s not joking, either. “The temperature is secret and atmosphere is secret. Pressure is secret,” he continues, cagily.

Okina does say that a high temperature is required, above 3,000C (5,432F). And that 1kg (2.2lbs) of cotton yields 200g (7oz) of carbon – with just 2g (0.07oz) needed for each battery cell. The firm bought a shipment of cotton in 2017 and still hasn’t used all of it, says Okina.

In the batteries developed by the company, together with researchers at Kyushu University in Fukuoka, Japan, carbon is used for the anode – one of the two electrodes between which flow ions, the charged particles in batteries. Ions move in one direction when the battery is charging and in the other direction when it releases energy to a device. The majority of batteries use graphite as an anode but PJP Eye argues their approach is more sustainable, since they can make anodes using waste cotton from the textile industry.

With huge demand for batteries expected in the coming years, propelled by the rise of electric vehicles and large energy storage systems, some researchers and businesses are frantically developing possible alternatives to the lithium ion and graphite batteries that are commonplace today. Like PJP Eye, they argue we could be using much more sustainable and widely available materials for battery production.

https://www.bbc.com/future/article/20231108-batteries-of-the-future-how-cotton-and-seawater-might-power-our-devices

‘AI’ Algorithms Aren’t People – Stop Testing Them as if They Are

So much unnecessary anthropomorphizing is happening in the Machine Learning (aka Artificial Intelligence) space. From calling outright fabrications of data ‘hallucinations’ to feigning human emotions (“I’m sorry I couldn’t help with that…”) and giving human names to interfaces, the discussions in these areas continue to be muddied more than clarified.

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3’s ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.”

What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination.

And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking). 

Such results are feeding a hype machine that predicts computers will soon come for white-collar jobs, replacing teachers, journalists, lawyers and more. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.

But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit.

“There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.”

https://www.technologyreview.com/2023/08/30/1078670/large-language-models-arent-people-lets-stop-testing-them-like-they-were

Worthy of Recognition and Praise – Jose Andres

Why is the media so focused on the most despicable, vile, self-serving garbage in society (rhymes with Melon Husk) when humble, dedicated people like Jose Andres actually work to help people in need?

Before chef José Andrés became famous for World Central Kitchen, he had already scaled the heights of his profession. His new cookbook celebrates the group’s humanitarian impact.

“I remember this Spanish guy screaming,” said chef-volunteer Karla Hoyos, describing the first time she met chef José Andrés. “He had just come from a meeting with FEMA [the US emergency management agency], and he was furious. And I thought, ‘Oh, no, no, nooo…’.” She shakes her head emphatically. “I am not going to deal with this person. I don’t care who he is.”

It was September 2017, shortly after Hoyos had arrived in Puerto Rico following Hurricane Maria, the deadly storm that devastated the island, killing nearly 3,000 people, making most roads impassable and knocking out 80% of the power grid. Several days earlier, Andrés had touched down with a team from his non-profit, World Central Kitchen (WCK), which he founded in 2010 after returning from Haiti where he fed survivors of a catastrophic earthquake. The organisation originally emphasised longer-term programmes – such as supporting nutritional training for young mothers – but after Maria, its efforts now focus on deploying an army of culinary first responders to feed people during and after the world’s worst disasters, natural or otherwise.

https://www.bbc.com/travel/article/20230911-jos-andrs-the-man-who-created-an-army-of-culinary-first-responders

“Software is eating the world” – and as a result consumers are getting crapped on

BMW recently made the misguided decision to charge a subscription for heated seats in some markets. Aside from the fact that the average BMW is a maintenance disaster and is nearly undriveable by its 3rd year, what is the point of charging a subscription for a fairly standard car feature? Simple greed.

Consider also that every one of those vehicles has the added expense of being fitted with the heating elements and controls that may not be enabled. That seems like wasted money unless the consumer is actually paying for the feature twice: once when it is priced into the cost of manufacturing the car and *again* when the consumer actually tries to use it.

The other problem with these feature flags hidden in your car is that BMW may not be the only party with access to them. It is not a great leap to imagine a new form of ransomware where you have to pay BMW and then some hacker to re-re-re-enable a feature on your vehicle.
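
Purely as an illustration of the underlying mechanism (the names and logic here are hypothetical, not BMW's actual system), a software-gated hardware feature tends to boil down to something like this:

```python
# Hypothetical sketch of a subscription-gated hardware feature: the heating
# element is always installed, but the firmware only drives it if a remote
# entitlement flag says the owner has paid. Whoever can flip that flag --
# the manufacturer or an attacker -- effectively controls the feature.

ENTITLEMENTS = {"VIN123": {"heated_seats": False}}  # normally fetched from the vendor's servers

def set_heated_seats(vin: str, requested_on: bool) -> bool:
    entitled = ENTITLEMENTS.get(vin, {}).get("heated_seats", False)
    if requested_on and not entitled:
        print("Heated seats are installed but locked; subscription required.")
        return False
    print(f"Heated seats {'on' if requested_on else 'off'}.")
    return requested_on

set_heated_seats("VIN123", True)
```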

AI Ethics Not Being Taught to Data Scientists

This feels like an extension of ethics, in general, not being part of the curriculum in education.

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.
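
For a sense of how low the bar for an “explainability tool” can be, here is a minimal sketch using scikit-learn's permutation importance; it is illustrative only and falls far short of a real explainability program:

```python
# Minimal sketch of a lightweight explainability check: permutation importance
# estimates how much each feature contributes to a trained model's predictions
# by measuring how much the score drops when that feature is shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```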

The study authors warned that this could have far-reaching consequences:

Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.

The survey also revealed concerns around the security of open-source tools and business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:

Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.

While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.

Is AutoML the End of Data Science?

Feels a bit overstated, but an interesting read on AutoML and its potential impacts on Data Science (and data scientists).

There’s a good reason for all the AutoML hype: AutoML is a must-have for many organizations.

Let’s take the example of Salesforce. They explain that their “customers are looking to predict a host of outcomes — from customer churn, sales forecasts and lead conversions to email marketing click throughs, website purchases, offer acceptances, equipment failures, late payments, and much more.”

In short, ML is ubiquitous. However, for ML to be effective for each unique customer, they would “have to build and deploy thousands of personalized machine learning models trained on each individual customer’s data for every single use case” and “the only way to achieve this without hiring an army of data scientists is through automation.”

While many people see AutoML as a way to bring ease-of-use and efficiency to ML, the reality is that for many enterprise applications, there’s just no other way to do it. A company like Facebook or Salesforce or Google can’t hire data scientists to build custom models for each of their billions of users, so they automate ML instead, enabling unique models at scale.

The number of ML components that are automated depends on the platform, but with Salesforce, it includes feature inference, automated feature engineering, automated feature validation, automated model selection, and hyperparameter optimization.

That’s a mouthful.

What this means is that data scientists can deploy thousands of models in production, with far less grunt work and hand-tuning, reducing turnaround time drastically.
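
To make a couple of those automated pieces concrete, here is a stripped-down sketch of automated model selection plus hyperparameter optimization using plain scikit-learn; real AutoML platforms (Salesforce's included) automate much more, such as feature inference and validation:

```python
# Minimal sketch of an AutoML-style search: try several candidate estimators,
# tune each with cross-validated grid search, and keep the best performer.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200], "max_depth": [3, None]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5).fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected {type(best_model).__name__} with CV accuracy {best_score:.3f}")
```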

By shifting the work from data crunching towards more meaningful analytics, AutoML enables more creative, business-focused applications of data science.

IEEE Ethical Design Initiative

A three-year effort by hundreds of engineers worldwide resulted in the publication in March of 2019 of Ethically Aligned Design (EAD) for Business, a guide for policymakers, engineers, designers, developers and corporations. The effort was headed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS), with John C. Havens as Executive Director, who spoke to AI Trends for an Executive Interview. We recently connected to ask how the effort has been going. Here is an update.

EAD First Edition, a 290-page document which Havens refers to as “applied ethics,” has seen some uptake, for example by IBM, which referred to the IEEE effort within its own resource called Everyday Ethics for AI. The IBM document is 26 pages, easy to digest, and structured into five areas of focus, each with recommended action steps and an example. The example for Accountability involved an AI team developing applications for a hotel. Among the recommendations were: enable guests to turn the AI off; conduct face-to-face interviews to help develop requirements; and institute a feedback learning loop.

The OECD (Organization for Economic Cooperation and Development) issued a paper after the release of an earlier version of EAD attesting to the close affinity between the IEEE’s work and the OECD Principles on AI. The OECD cited as shared values “the need for such systems to primarily serve human well-being through inclusive and sustainable growth; to respect human-centered values and fairness; and to be robust, safe and dependable, including through transparency, explainability and accountability.”

Self-Evolving Artificial Intelligence?

Teaching algorithms to create novel algorithms…

Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.

“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.

In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.

So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
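
As a toy illustration of the evolutionary loop described above (selection plus mutation, "survival of the fittest"), here is a sketch that is massively simplified from AutoML-Zero: evolution here searches for the weights of a tiny linear predictor rather than discovering whole learning algorithms.

```python
# Toy evolutionary search: keep the fittest candidates each generation and
# refill the population with mutated copies of the survivors.
import random

def fitness(weights, data):
    # Negative squared error of a linear predictor on the data (higher is better).
    return -sum((y - sum(w * x for w, x in zip(weights, xs))) ** 2 for xs, y in data)

def mutate(weights, scale=0.1):
    return [w + random.gauss(0, scale) for w in weights]

random.seed(0)
target = [2.0, -3.0]  # hidden "true" weights the population should evolve toward
inputs = [[random.random(), random.random()] for _ in range(50)]
data = [(xs, sum(t * x for t, x in zip(target, xs))) for xs in inputs]

population = [[random.gauss(0, 1) for _ in target] for _ in range(20)]
for generation in range(300):
    population.sort(key=lambda w: fitness(w, data), reverse=True)
    survivors = population[:5]                                   # survival of the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best weights found:", [round(w, 2) for w in population[0]])
```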

AI Transparency and Fairness

A post on efforts to further bolster AI transparency and fairness by the AI World Society.

Learning algorithms find patterns in the data they are given. However, the processes by which that data is collected, relevant variables are defined, and hypotheses are formulated may themselves depend on structural unfairness found in society, the paper suggests.

“Algorithms based on such data could introduce or perpetuate a variety of discriminatory biases, thereby maintaining a cycle of injustice,” the authors state. “The community within statistics and machine learning that works on issues of fairness in data analysis have taken a variety of approaches to defining fairness formally, with the aim of ultimately ensuring that learning algorithms are fair.”

The paper poses some tough questions. For instance, “Since, unsurprisingly, learning algorithms that use unfair data can lead to biased or unfair conclusions, two questions immediately suggest themselves. First, what does it mean for a world and data that comes from this world to be fair? And second, if data is indeed unfair, what adjustments must be made to learning algorithms that use this data as input to produce fairer outputs?”
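
As one example of what a formal definition can look like, here is a minimal sketch of demographic parity, just one of the many competing fairness criteria the paper alludes to, not an answer to its questions:

```python
# Demographic parity in its simplest form: compare the rate of positive
# predictions across groups; a large gap suggests one group is favored.

def demographic_parity_difference(predictions, groups):
    """predictions: list of 0/1 model outputs; groups: parallel list of group labels."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"demographic parity gap: {demographic_parity_difference(preds, groups):.2f}")
```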

Cause and effect is a challenging area of statistics; correlation does not imply causation, the experts say. Teasing out causality often involves obtaining data in a carefully controlled way. An early example is the work done by James Lind for the Royal Navy, when scurvy among sailors was a health crisis. Lind organized what later came to be viewed as one of the first instances of a clinical trial. He arranged 12 sailors into six pairs and gave each pair one of six scurvy treatments thought at the time to be effective. Of the treatments, only citrus was effective. That led to citrus products being issued on all Royal Navy ships.

Whether fairness can be defined by computer scientists and engineers is an open question. “Issues of fairness and justice have occupied the ethical, legal, and political literature for centuries. While many general principles are known, such as fairness-as-proportionality, just compensation, and social equality, general definitions have proven elusive,” the paper states.

Moreover, “Indeed, a general definition may not be possible since notions of fairness are ultimately rooted in either ethical principle or ethical intuition, and both principles and intuitions may conflict.”

Mediation analysis is one approach to making algorithms more fair. Needless to say, the work is continuing.

TinyML and the Future of Design

Interesting post on how ‘magical experiences’ fueled by AI and machine learning will change how products are designed and used.

There is growing momentum, demonstrated by technical progress and ecosystem development. One of the leading startups working to help engineers take advantage of TinyML (by automating data collection, training, testing, and deployment) is Edge Impulse. Starting with embedded or IoT devices, Edge Impulse offers developers the tools and guidance to collect data straight from edge devices and build a model that can detect “behavior”, discerning right from wrong and noise from signal, so they can actually make sense of what happens in the real world, across billions of devices, in every place, and everything. By deploying the Edge Impulse model as part of everyone’s firmware, you create the biggest neural network on earth. Effectively, Edge Impulse gives brains to your previously passive devices so you can build a better product with neural personality.

Another interesting company is Syntiant, which is building a new processor for deep learning that departs dramatically from traditional computing methods. By focusing on memory access and parallel processing, its Neural Decision Processors operate at efficiency levels that are orders of magnitude higher than any other technology. The company claims its processors can make devices approximately 200x more efficient by providing 20x the throughput over current low-power MCU solutions, and subsequently, enabling larger networks at significantly lower power. The result? Voice interfaces that allow a far richer and more reliable user experience, otherwise known as “Wow” and “How did it do that?”

Trustworthy AI Framework

An interesting article on business challenges with artificial intelligence.

Artificial intelligence (AI) technology continues to advance by leaps and bounds and is quickly becoming a potential disrupter and essential enabler for nearly every company in every industry. At this stage, one of the barriers to widespread AI deployment is no longer the technology itself; rather, it’s a set of challenges that ironically are far more human: ethics, governance, and human values.

As AI expands into almost every aspect of modern life, the risks of misbehaving AI increase exponentially—to a point where those risks can literally become a matter of life and death. Real-world examples of AI gone awry include systems that discriminate against people based on their race, age, or gender and social media systems that inadvertently spread rumors and disinformation and more.

Even worse, these examples are just the tip of the iceberg. As AI is deployed on a larger scale, the associated risks will likely only increase—potentially having serious consequences for society at large, and even greater consequences for the companies responsible. From a business perspective, these potential consequences include everything from lawsuits, regulatory fines, and angry customers to embarrassment, reputation damage, and destruction of shareholder value.

Yet with AI now becoming a required business capability—not just a “nice to have”—companies no longer have the option to avoid AI’s unique risks simply by avoiding AI altogether. Instead, they must learn how to identify and manage AI risks effectively. In order to achieve the potential of human and machine collaboration, organizations need to communicate a plan for AI that is adopted and spoken from the mailroom to the boardroom. By having an ethical framework in place, organizations create a common language by which to articulate trust and help ensure integrity of data among all of their internal and external stakeholders. Having a common framework and lens to apply the governance and management of risks associated with AI consistently across the enterprise can enable faster, and more consistent adoption of AI.

Cortex – A Platform for Developers, Not Just Data Scientists

Some thoughts from the creator of an open source machine learning platform focused on developers.

Machine learning has, historically, been the purview of data science teams. This makes it a bit counter-intuitive that we built Cortex, our open source ML infrastructure platform, primarily for software engineers.

Going all the way back to machine learning’s roots in the 1950s, the field has historically been research-focused—things like Arthur Samuel’s checkers-playing AI (1959) or IBM’s chess-playing Deep Blue (1988).

Starting around 2010, there was a renewed interest in deep learning, with major tech companies releasing breakthroughs. Projects like Google Brain, DeepMind, and OpenAI (among others) began publishing new, state-of-the-art results.

These breakthroughs manifested as features in big companies’ products:

  • Netflix’s recommendation engine
  • Gmail’s smart compose
  • Facebook’s facial recognition tags

In addition, this renewed focus on machine learning—and particularly deep learning—led to the creation of better tools and frameworks, like Google’s TensorFlow and Facebook’s PyTorch, as well as open source models and datasets, like OpenAI’s GPT-2 and ImageNet.

With better tools, open source models, and accessible data, it became possible for small teams to train models for production. As a consequence of this democratization, a wave of new products has emerged, all of which at their core are “just” ML models wrapped in software. We refer to these products as ML-native.
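
A hypothetical sketch of what “an ML model wrapped in software” often looks like in practice: a trained model served behind a small HTTP prediction endpoint. This is illustrative only and is not Cortex's API.

```python
# Minimal model-serving sketch: train a classifier, expose it as a /predict
# endpoint that accepts JSON features and returns the predicted class.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [5.1, 3.5, 1.4, 0.2]
    prediction = int(model.predict([features])[0])
    return jsonify({"class": prediction})

if __name__ == "__main__":
    app.run(port=8080)
```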

MLOps and DevOps

This article briefly outlines how as Machine Learning (ML) becomes more a part of corporate solutions, the need for MLOps is going to become more critical.

The term MLOps refers to a set of techniques and practices that allow data scientists and operations professionals to collaborate. MLOps aims to manage the deployment of machine learning and deep learning models in large-scale production environments.

The term DevOps comes from the software engineering world and is concerned with developing and operating large-scale software systems. DevOps introduces two concepts: Continuous Integration (CI) and Continuous Delivery (CD). DevOps aims to shorten development cycles, increase deployment velocity and create dependable releases.
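
One small, hypothetical example of a CI-style gate applied to ML: before a newly trained model is promoted, an automated check compares it against the currently deployed model and fails the pipeline if it is worse. The names and thresholds below are made up for illustration.

```python
# Minimal "promotion gate" sketch: block deployment if the candidate model
# underperforms the production model on a held-out dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, random_state=0)

candidate = LogisticRegression(max_iter=1000, C=1.0).fit(X_train, y_train)
production = LogisticRegression(max_iter=1000, C=0.01).fit(X_train, y_train)

candidate_score = candidate.score(X_holdout, y_holdout)
production_score = production.score(X_holdout, y_holdout)

MIN_IMPROVEMENT = -0.01  # tolerate tiny regressions, block anything larger
if candidate_score - production_score < MIN_IMPROVEMENT:
    raise SystemExit(f"blocked: candidate {candidate_score:.3f} < production {production_score:.3f}")
print(f"promote candidate: {candidate_score:.3f} vs production {production_score:.3f}")
```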

Is an ‘AI Winter’ Coming?

A BBC post speculating on whether a cooling-off period is coming for AI.

The last decade was a big one for artificial intelligence but researchers in the field believe that the industry is about to enter a new phase.

Hype surrounding AI has peaked and troughed over the years as the abilities of the technology get overestimated and then re-evaluated.

The peaks are known as AI summers, and the troughs AI winters.

The 2010s were arguably the hottest AI summer on record, with tech giants repeatedly touting AI’s abilities.

AI pioneer Yoshua Bengio, sometimes called one of the “godfathers of AI”, told the BBC that AI’s abilities were somewhat overhyped in the 2010s by certain companies with an interest in doing so.

There are signs, however, that the hype might be about to start cooling off.

TinyML = Big Opportunity

This post explores why TinyML may be the next big thing.

A coalescence of several trends has made the microcontroller not just a conduit for implementing IoT applications but also a powerful, independent processing mechanism in its own right. In recent years, hardware advancements have made it possible for microcontrollers to perform calculations much faster.  Improved hardware coupled with more efficient development standards have made it easier for developers to build programs on these devices. Perhaps the most important trend, though, has been the rise of tiny machine learning, or TinyML. It’s a technology we’ve been following since investing in a startup in this space.

TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers. A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.

In other words, those 250 billion microcontrollers in our printers, TVs, cars, and pacemakers can now perform tasks that previously only our computers and smartphones could handle. All of our devices and appliances are getting smarter thanks to microcontrollers.
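
A minimal sketch of the usual TinyML workflow, under the assumption of a TensorFlow toolchain: train a small Keras model, then convert it to a quantized TensorFlow Lite flatbuffer that a microcontroller runtime such as TensorFlow Lite for Microcontrollers can execute. The model and data below are placeholders, not a real sensor application.

```python
# Train a tiny stand-in model and convert it to a compact TFLite flatbuffer,
# the format typically deployed to microcontrollers.
import numpy as np
import tensorflow as tf

# Placeholder task: classify a 32-sample sensor window into 2 classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x = np.random.rand(256, 32).astype("float32")
y = np.random.randint(0, 2, size=256)
model.fit(x, y, epochs=1, verbose=0)

# Convert with default optimizations (weight quantization) to shrink the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"converted model size: {len(tflite_model)} bytes")
```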

TinyML represents a collaborative effort between the embedded ultra-low power systems and machine learning communities, which traditionally have operated largely independently. This union has opened the floodgates for new and exciting applications of on-device machine learning. However, the knowledge that deep learning and microcontrollers are a perfect match has been pretty exclusive, hidden behind the walls of tech giants like Google and Apple. This becomes more obvious when you learn that this paradigm of running modified deep learning models on microcontrollers is responsible for the “Okay Google” and “Hey Siri,” functionality that has been around for years.

But why is it important that we be able to run these models on microcontrollers? Much of the sensor data generated today is discarded because of cost, bandwidth, or power constraints – or sometimes a combination of all three. For example, take an imagery micro-satellite. Such satellites are equipped with cameras capable of capturing high resolution images but are limited by the size and number of photos they can store and how often they can transmit those photos to Earth. As a result, such satellites have to store images at low resolution and at a low frame rate. What if we could use image detection models to save high resolution photos only if an object of interest (like a ship or weather pattern) was present in the image? While the computing resources on these micro-satellites have historically been too small to support image detection deep learning models, TinyML now makes this possible.

AI BS

or Artificial Intelligence Bull Shitake

There are a lot of claims being made, and as this article points out, not many of them are supported by strong evidence/math.

In Rebooting AI, Ernie Davis and I made six recommendations, each geared towards how readers, journalists, and researchers alike might assess each new result that is achieved, asking the same set of questions in a limits section in the discussion of their papers:


1. Stripping away the rhetoric, what does the AI system actually do? Does a “reading system” really read?

2. How general is the result? (Could a driving system that works in Phoenix work as well in Mumbai? Would a Rubik’s cube system work in opening bottles? How much retraining would be required?)

3. Is there a demo where interested readers can probe for themselves?

4. If an AI system is allegedly better than humans, then which humans, and how much better? (A comparison with low-wage workers who have little incentive to do well may not truly probe the limits of human ability.)

5. How far does succeeding at the particular task actually take us toward building genuine AI?

6. How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? (AlphaGo works fine on a 19×19 board but would need to be retrained to play on a rectangular board; the lack of transfer is telling.)