New Batteries from Cotton and Seawater?

Anti-EV types frequently point to the materials that go into current battery technology and make dire claims about the environmental impact of their sourcing (as if producing petroleum were a zero-impact exercise). But that is today. What if future batteries could be created from non-mined materials?

Mining the lithium and other minerals we need for batteries is taking an increasing toll on the environment. There are alternative materials all around us though.

Zip. The power’s out. But on a street in India, there’s a cash machine still happily dispensing banknotes. Thanks, in part, to burnt cotton. For this cash machine has a backup battery inside it – a battery that contains carbon from carefully combusted cotton.

“The exact process is secret, to be honest with you,” says Inketsu Okina, chief intelligence officer at PJP Eye, the Japanese firm that made the battery. He’s not joking, either. “The temperature is secret and atmosphere is secret. Pressure is secret,” he continues, cagily.

Okina does say that a high temperature is required, above 3,000C (5,432F). And that 1kg (2.2lbs) of cotton yields 200g (7oz) of carbon – with just 2g (0.07oz) needed for each battery cell. The firm bought a shipment of cotton in 2017 and still hasn’t used all of it, says Okina.
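
Those figures invite a quick sanity check. Here is the arithmetic as a tiny sketch (the numbers come straight from Okina's statements above; any processing losses beyond them are unaccounted for):

    # Back-of-the-envelope check using Okina's figures above.
    carbon_per_kg_cotton_g = 200.0  # 1 kg of cotton -> ~200 g of carbon
    carbon_per_cell_g = 2.0         # ~2 g of carbon needed per cell

    cells_per_kg_cotton = carbon_per_kg_cotton_g / carbon_per_cell_g
    print(f"~{cells_per_kg_cotton:.0f} cells per kg of cotton")  # ~100

At roughly 100 cells per kilogram of raw cotton, it is easy to see why a single 2017 shipment has lasted the company this long.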

In the batteries developed by the company, together with researchers at Kyushu University in Fukuoka, Japan, carbon is used for the anode – one of the two electrodes between which ions, the charged particles in batteries, flow. Ions move in one direction when the battery is charging and in the other when it releases energy to a device. The majority of batteries use graphite as the anode, but PJP Eye argues its approach is more sustainable, since it can make anodes using waste cotton from the textile industry.

With huge demand for batteries expected in the coming years, propelled by the rise of electric vehicles and large energy storage systems, some researchers and businesses are frantically developing possible alternatives to the lithium ion and graphite batteries that are commonplace today. Like PJP Eye, they argue we could be using much more sustainable and widely available materials for battery production.

https://www.bbc.com/future/article/20231108-batteries-of-the-future-how-cotton-and-seawater-might-power-our-devices

Worthy of Recognition and Praise – Jose Andres

Why is the media so focused on the most despicable, vile, self-serving garbage in society (rhymes with Melon Husk) when humble, dedicated people like Jose Andres actually work to help people in need?

Before chef José Andrés became famous for World Central Kitchen, he had already scaled the heights of his profession. His new cookbook celebrates the group’s humanitarian impact.

“I remember this Spanish guy screaming,” said chef-volunteer Karla Hoyos, describing the first time she met chef José Andrés. “He had just come from a meeting with FEMA [the US emergency management agency], and he was furious. And I thought, ‘Oh, no, no, nooo…’.” She shakes her head emphatically. “I am not going to deal with this person. I don’t care who he is.”

It was September 2017, shortly after Hoyos had arrived in Puerto Rico following Hurricane Maria, the deadly storm that devastated the island, killing nearly 3,000 people, making most roads impassable and knocking out 80% of the power grid. Several days earlier, Andrés had touched down with a team from his non-profit, World Central Kitchen (WCK), which he founded in 2010 after returning from Haiti where he fed survivors of a catastrophic earthquake. The organisation originally emphasised longer-term programmes – such as supporting nutritional training for young mothers – but after Maria, its efforts now focus on deploying an army of culinary first responders to feed people during and after the world’s worst disasters, natural or otherwise.

https://www.bbc.com/travel/article/20230911-jos-andrs-the-man-who-created-an-army-of-culinary-first-responders

Jotul GV 370 DV Won’t Light

This is a bit of a departure from the tech-oriented articles that I usually post but I thought I would just “put this out there” because I couldn’t find any relevant info when I had this issue with this product.

Basically, the Jotul GV 370 DV is a gas stove (the kind you use to heat a room, not to cook on). It was installed and minimally tested for gas flow. The problem was, it wouldn’t stay lit when we tried to use it for the first time. The pattern was: the pilot would light, 30-90 seconds would go by, then the stove would flash-combust (the gas in the chamber would ignite) rather forcefully but not stay on. This cycle would continue until a red LED on the front of the control unit started flashing. At this point, I turned off the stove, turned off the gas and started looking for answers.

I read and re-read the installation docs that came with the Jotul GV 370 DV. One thing I noticed is that the damper setting was not correct for the amount of vent pipe that was installed. That was an easy fix. Unfortunately, it didn’t solve the problem (or change it at all). Internet searches didn’t reveal any additional useful information, just a couple of edge cases and people arguing philosophy rather than offering practical solutions.

I even tried the justAnswers web site. I paid $5 for a ‘trial membership’ and was connected to an absolutely useless ‘expert’ who just tried to read me the online posts I had already found via Google. His final bit of ‘expert advice’ was to get a voltage meter, disassemble the stove and tell him what the voltage readings were on all the stove components. An absolutely pointless exercise. I thanked him for wasting my time and requested a refund from justAnswers.

At this point, I elected to take the glass off the front of the unit and inspect the burning media bed to make sure the gas jets weren’t blocked or obstructed. This is when I noticed that the gas bed was out of alignment with the pilot starter. I removed the gas bed tray and re-seated it so that the notch for the pilot was centered on the pilot (instead of all the way to the right, like it was when I originally opened it up). After this adjustment, I carefully placed the gas bed media back on the pan, reassembled the glass and turned the gas back on.

When I turned on the stove, the pilot came on; 30 seconds later the perimeter of the bed lit, went out, re-lit, then stayed on. Success! All that because of a one-centimeter misalignment of the gas bed with the pilot.

So there you have it. Hopefully this will help someone else who has this issue solve the problem quickly, without ‘expert’ help.

To Microservice or Not

This article reinforces what I have been saying for years: microservices are a big mistake, especially for developers who don’t understand distributed systems, high availability and observability. To be successful, they must be properly designed and implemented, unlike most of the copy-and-paste, we-don’t-need-no-stinkin-design development that is seen today.

From the article:

We engineers have an affliction. It’s called “wanting to use the latest tech because it sounds cool, even though it’s technically more difficult.” Got that from the doctor’s office, it’s 100% legit. The diagnosis was written on my prescription for an over-the-counter monolith handbook. From 2004. Seriously though, we do this all the time. Every time something cool happens, we flock to it like moths to a campfire. And more often than not, we get burned.

AI Ethics Not Being Taught to Data Scientists

This feels like an extension of ethics, in general, not being part of the curriculum in education.

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.

The study authors warned that this could have far-reaching consequences:

Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.

The survey also revealed concerns around the security of open-source tools, business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:

Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.

While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.
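
As a concrete illustration of what the “fairness system” the survey asks about might actually check, here is a minimal sketch of one common test, demographic parity. The data, the metric choice, and the tolerance are all illustrative assumptions on my part, not anything from Anaconda’s study:

    # Minimal demographic-parity check: compare the rate of positive
    # model outcomes across groups. All data here is illustrative.
    predictions = [1, 0, 1, 1, 0, 1, 0, 0]   # model outputs (1 = approve)
    groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    gap = abs(positive_rate("a") - positive_rate("b"))
    print(f"demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # the tolerance is a policy choice, not a law of nature
        print("warning: outcome rates differ notably between groups")

A production fairness system would track metrics like this across many data slices and over time, but the principle is the same: measure first, then act.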

Self-Evolving Artificial Intelligence?

Teaching algorithms to create novel algorithms…

Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.

“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.

In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.

So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
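
The article doesn’t show AutoML-Zero’s internals, but the evolutionary loop it builds on is easy to sketch. Below is a toy version of that “survival of the fittest” search: candidate programs are tiny lists of arithmetic instructions, scored on a regression task, with the fittest kept and mutated each generation. Everything here (the instruction set, the task, the parameters) is my own illustrative assumption, not Google’s code:

    import random

    # A toy "program" is a sequence of (op, constant) steps applied to x.
    OPS = {"add": lambda v, c: v + c,
           "mul": lambda v, c: v * c}

    def run(program, x):
        v = x
        for op, c in program:
            v = OPS[op](v, c)
        return v

    def fitness(program, data):
        # Lower mean squared error means a fitter program.
        return sum((run(program, x) - y) ** 2 for x, y in data) / len(data)

    def mutate(program):
        child = list(program)
        i = random.randrange(len(child))
        op, c = child[i]
        child[i] = (random.choice(list(OPS)), c + random.uniform(-1, 1))
        return child

    # Target behaviour the search should "discover": y = 3x + 2.
    data = [(x, 3 * x + 2) for x in range(-5, 6)]
    population = [[(random.choice(list(OPS)), random.uniform(-2, 2))
                   for _ in range(2)] for _ in range(50)]

    for generation in range(300):
        population.sort(key=lambda p: fitness(p, data))
        survivors = population[:10]        # survival of the fittest
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]

    population.sort(key=lambda p: fitness(p, data))
    print(f"best program: {population[0]}")
    print(f"error: {fitness(population[0], data):.4f}")

The point of mutating programs rather than just weights is that the search can stumble onto structures no engineer hand-designed, which is exactly the leap the researchers are after.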

AI Transparency and Fairness

A post on efforts by the AI World Society to further bolster AI transparency and fairness.

Learning algorithms find patterns in the data they are given. However, the processes by which that data is collected, relevant variables are defined, and hypotheses are formulated may themselves depend on structural unfairness found in society, the paper suggests.

“Algorithms based on such data could introduce or perpetuate a variety of discriminatory biases, thereby maintaining a cycle of injustice,” the authors state. “The community within statistics and machine learning that works on issues of fairness in data analysis have taken a variety of approaches to defining fairness formally, with the aim of ultimately ensuring that learning algorithms are fair.”

The paper poses some tough questions. For instance, “Since, unsurprisingly, learning algorithms that use unfair data can lead to biased or unfair conclusions, two questions immediately suggest themselves. First, what does it mean for a world and data that comes from this world to be fair? And second, if data is indeed unfair, what adjustments must be made to learning algorithms that use this data as input to produce fairer outputs?”

Cause and effect is a challenging area of statistics; correlation does not imply causation, the experts say. Teasing out causality often involves obtaining data in a carefully controlled way. An early example is the work done by James Lind for the Royal Navy, when scurvy among sailors was a health crisis. Lind organized what later came to be viewed as one of the first instances of a clinical trial. He arranged 12 sailors into six pairs, and gave each pair one of six scurvy treatments thought at the time to be effective. Of the treatments, only citrus was effective. That led to citrus products being issued on all Royal Navy ships.

Whether fairness can be defined by computer scientists and engineers is an open question. “Issues of fairness and justice have occupied the ethical, legal, and political literature for centuries. While many general principles are known, such as fairness-as-proportionality, just compensation, and social equality, general definitions have proven elusive,” the paper states.

Moreover, “Indeed, a general definition may not be possible since notions of fairness are ultimately rooted in either ethical principle or ethical intuition, and both principles and intuitions may conflict.”

Mediation analysis is one approach to making algorithms more fair. Needless to say, the work is continuing.
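
Since mediation analysis gets name-dropped without explanation, here is a bare-bones sketch of the idea: split the effect of a sensitive attribute on an outcome into a direct path and an indirect path through a mediator variable, using two regressions. The synthetic data and the linear-model setup are my illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Synthetic example: attribute A affects outcome Y directly and
    # indirectly through mediator M (think: a proxy feature).
    A = rng.integers(0, 2, n).astype(float)
    M = 0.8 * A + rng.normal(0, 1, n)            # A -> M
    Y = 0.3 * A + 0.5 * M + rng.normal(0, 1, n)  # A -> Y, M -> Y

    # Total effect: regress Y on A alone.
    total = np.polyfit(A, Y, 1)[0]

    # Direct effect: regress Y on A and M together.
    X = np.column_stack([A, M, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    direct = coef[0]

    print(f"total effect:    {total:.2f}")           # ~0.7 = 0.3 + 0.8*0.5
    print(f"direct effect:   {direct:.2f}")          # ~0.3
    print(f"indirect effect: {total - direct:.2f}")  # ~0.4

If the indirect effect through the mediator accounts for most of the disparity, the proxy feature rather than the attribute itself is doing the damage, which is exactly the kind of distinction a fairness-aware learning algorithm needs to act on.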

MLOps and DevOps

This article briefly outlines how, as Machine Learning (ML) becomes more a part of corporate solutions, the need for MLOps is going to become more critical.

The term MLOps refers to a set of techniques and practices for data scientists to collaborate with operations professionals. MLOps aims to manage the deployment of machine learning and deep learning models in large-scale production environments.

The term DevOps comes from the software engineering world and is concerned with developing and operating large-scale software systems. DevOps introduces two concepts: Continuous Integration (CI) and Continuous Delivery (CD). DevOps aims to shorten development cycles, increase deployment velocity and create dependable releases.
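
One concrete place CI/CD meets ML is the quality gate: the pipeline retrains the model on every change and fails the build if performance regresses, so a bad model never ships. A minimal sketch, where the dataset, the scikit-learn model, and the threshold are all illustrative assumptions:

    import sys

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Train a candidate model, exactly as a CI job would on each commit.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    print(f"holdout accuracy: {accuracy:.3f}")

    # Quality gate: a non-zero exit code fails the CI pipeline,
    # blocking deployment of a regressed model.
    THRESHOLD = 0.90  # illustrative; in practice set per project
    sys.exit(0 if accuracy >= THRESHOLD else 1)

The non-zero exit code is the same contract DevOps already uses for failing unit tests, applied here to model quality.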

AI BS

or Artificial Intelligence Bull Shitake

There are a lot of claims being made, and as this article points out, not many of them are supported by strong evidence/math.

In Rebooting AI, Ernie Davis and I made six recommendations, each geared towards how readers – and journalists – and researchers might equally assess each new result that they achieve, asking the same set of questions in a limits section in the discussion of their papers:

1. Stripping away the rhetoric, what does the AI system actually do? Does a “reading system” really read?

2. How general is the result? (Could a driving system that works in Phoenix work as well in Mumbai? Would a Rubik’s cube system work in opening bottles? How much retraining would be required?)

3. Is there a demo where interested readers can probe for themselves?

4. If the AI system is allegedly better than humans, then which humans, and how much better? (A comparison with low-wage workers who have little incentive to do well may not truly probe the limits of human ability.)

5. How far does succeeding at the particular task actually take us toward building genuine AI?

6. How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? AlphaGo works fine on a 19×19 board, but would need to be retrained to play on a rectangular board; the lack of transfer is telling.

OpenCV Speed Cam

I really need to find the time to build this DIY speed cam. From my home office window, I have an excellent view of an intersection where I would estimate about 70% of the cars don’t even stop at the posted Stop sign. Further, I would guess that close to 90% of them are going faster than the 25 MPH speed limit. Data is good.

Computer vision itself isn’t anything new, but it has only recently reached a point where it’s practical for hobbyists to utilize. Part of that is because hardware has improved dramatically in recent years, but it also helps that good open-source machine learning and computer vision software has become available. More software options are becoming available, but OpenCV is one that has been around for a while now and is still one of the most popular. Over on PyImageSearch, Adrian Rosebrock has put together a tutorial that will walk you through how to detect vehicles and then track them to estimate the speed at which they’re traveling.

Rosebrock’s guide will show you how to make your very own DIY speed camera. But even if that isn’t something you have a need for, the tutorial is worth following just to learn some useful computer vision techniques. You could, for instance, modify this setup to count how many cars enter and exit a parking lot. This can be done with affordable and readily-available hardware, so the barrier to entry is low — perfect for the kind of project that is more of a learning experience than anything else.
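
Rosebrock’s tutorial is the place to go for the full pipeline (object detection, centroid tracking, calibration). As a taste of the core idea, here is a stripped-down sketch that estimates speed from how far a tracked blob’s centroid moves between frames. The background-subtraction shortcut, the single-vehicle simplification, and the calibration constant are my assumptions, not his method:

    import time

    import cv2

    # Calibration: how many meters one pixel spans at the road's distance.
    # This must be measured for your own camera setup (illustrative value).
    METERS_PER_PIXEL = 0.05

    cap = cv2.VideoCapture("traffic.mp4")  # or 0 for a live camera
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    prev_cx, prev_time = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Foreground mask -> contours -> largest moving blob (one vehicle).
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        vehicles = [c for c in contours if cv2.contourArea(c) > 2000]
        if vehicles:
            x, y, w, h = cv2.boundingRect(max(vehicles, key=cv2.contourArea))
            cx, now = x + w / 2, time.monotonic()
            if prev_cx is not None:
                pixels_per_s = abs(cx - prev_cx) / (now - prev_time)
                mph = pixels_per_s * METERS_PER_PIXEL * 2.23694
                print(f"estimated speed: {mph:.1f} mph")
            prev_cx, prev_time = cx, now

    cap.release()

The METERS_PER_PIXEL calibration is what makes or breaks accuracy; it has to be measured against a known distance in the camera’s view.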

Problems with AI Transparency

As more and more business decisions get handed over (sometimes blindly) to computer algorithms (aka ‘AI’), companies are very late to the game in considering what consequences that delegation will yield. As a buffer against those consequences, a company may want to be more transparent about how its algorithms work, but that is not without its challenges.

To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn’t worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.

Second, organizations must also recognize that security is becoming an increasing concern in the world of AI. As AI is adopted more widely, more security vulnerabilities and bugs will surely be discovered, as my colleagues and I at the Future of Privacy Forum recently argued. Indeed, security may be one of the biggest long-term barriers to the adoption of AI.

Reading Is Good For Your Brain

Science has found that reading is essential for a healthy brain. We already know reading is good for children’s developing noggins: A study of twins at the University of California at Berkeley found that kids who started reading at an earlier age went on to perform better on certain intelligence tests, such as analyses of their vocabulary size.

Other studies show that reading continues to develop the brains of adults. One 2012 Stanford University study, where people read passages of Jane Austen while inside an MRI, indicates that different types of reading exercise different parts of your brain. As you get older, another study suggests, reading might help slow down or even halt cognitive decline.

https://www.popsci.com/read-more-books

And it doesn’t seem to matter if it is a physical book, an e-reader or an audiobook (although audiobooks have a slightly different impact on the brain).

As for audiobooks, the research so far has found that they stimulate the brain just as deeply as black-and-white pages, although they affect your gray matter somewhat differently. Because you’re listening to a story, you’re using different methods to decode and comprehend it. With print books, you need to provide the voice, called the prosody – you’re imagining the “tune and rhythm of speech”, the intonation, the stress on certain syllables, and so on. With audio, the voice actor provides that information for you, so your brain isn’t generating the prosody itself, but rather working to understand the prosody in your ears.

Teaching AI To Be ‘Smarter’ By Doubting Itself

An interesting post suggesting that building self-doubt into deep learning algorithms may lead to higher-quality conclusions.

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

Deep learning, which involves feeding example data to a large and powerful neural network, has been an enormous success over the past few years, enabling machines to recognize objects in images or transcribe speech almost perfectly. But it requires lots of training data and computing power, and it can be surprisingly brittle.

Somewhat counterintuitively, this self-doubt offers one fix. The new approach could be useful in critical scenarios involving self-driving cars and other autonomous machines.

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”
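
The frameworks Tran and colleagues work on handle probability inside deep learning itself; a much simpler way to see the idea is a bootstrap ensemble, where disagreement among models fit to resampled data serves as the “measure of how certain it is”. The toy data and polynomial models are my illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1-D regression data: y = sin(x) plus noise.
    x = rng.uniform(-3, 3, 60)
    y = np.sin(x) + rng.normal(0, 0.2, 60)

    # Bootstrap ensemble: fit many small models on resampled data.
    # The spread of their predictions is a simple uncertainty estimate.
    models = []
    for _ in range(30):
        idx = rng.integers(0, len(x), len(x))
        models.append(np.polyfit(x[idx], y[idx], deg=3))

    x_query = np.array([0.0, 2.5, 5.0])  # 5.0 lies outside the training range
    preds = np.array([np.polyval(m, x_query) for m in models])

    for xq, mean, std in zip(x_query, preds.mean(axis=0), preds.std(axis=0)):
        print(f"x={xq:4.1f}: prediction {mean:+.2f} +/- {std:.2f}")

Run it and the spread should balloon at x = 5.0, far outside the training data; that growing error bar is the ensemble’s way of flagging a prediction it shouldn’t trust.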

What Does An Enterprise Architect Do?

‘Enterprise Architect’ is a very fashionable title these days, which causes a bit of confusion (and consternation) for actual EA practitioners. Typically, the title is attached to someone who has deep technical knowledge of a given technology/application/suite. That is not an Enterprise Architect.

This article does a great job of clarifying what Enterprise Architecture is and what an Architect does (or should do).

“Before answering that question, it is important to note that no architecture is a solution. Often people confuse a solution, such as corporate infrastructure, with the architecture. This is an all too common mistake and very misleading. An architecture guided the development of the infrastructure; the infrastructure is a solution – not the architecture.

“The architect’s role isn’t to create solutions. Rather, the architect’s role is to inform decision-makers and guide development of solutions based on an understanding of the business drivers and needs of the organization. A single person can, of course, hold both roles, architect and developer. But the architect typically takes a broader and material-independent view than the developer, yet leaves much downstream detail to the development community.

“So, since an architecture is not a solution, what is it? It is a package of information that describes what is needed to achieve a given result and what it might look like in a future state if implemented. In order for an architecture to be effective – that is, for it to be realized in solutions – it must guide decisions.

“Any good architecture addresses a well-defined scope and seeks to achieve specified goals. For example, an architecture for a back-office software suite will seek to enable improvements to back-office operations, an architecture for a department network will enable department interconnectivity, an architecture for corporate infrastructure will address the services needed throughout the organization at lower cost, etc. For each scope there are decision-makers who can either accept or reject the guidance from the architect, such as office managers, network managers, the head of IT, etc.

“Those that deal with Enterprise Architecture take the broadest view, deal with issues that are oftentimes beyond even the corporate level, and are most effective when they influence corporate or Board level decision-makers.

Unpredictions for Artificial Intelligence (AI)

This post is a refreshing counterpoint to the breathless ‘AI will take over everything’ reporting that is increasingly common of late.

Self-driving cars

The first area is that “we won’t be riding in self-driving cars”. As Dr. Reddy explains: “While many are predicting a driverless future, we’re a long ‘road’ away from autonomous vehicles.” This is in terms of cars that will take commuters to work, a situation where the commuter can sit back and read his or her iPad while paying little attention to the traffic outside.

He adds: “For a number of years ahead, human operators and oversight will still rule the roads, because the discrete human judgments that are essential while driving will still require a person with all of his or her faculties — and the attendant liability for when mistakes happen. Besides technical challenges, humans tend to be more forgiving about mistakes made by human intelligence as opposed to those made by artificial intelligence.”

Disappearing jobs

The second ‘unprediction’ is that people will not be replaced by AI bots this year. Dr. Reddy states: “While it is possible that artificial intelligence agents might replace (but more likely supplement) certain administrative tasks, the reality is that worker displacement by AI is over-hyped and unlikely.” So robots won’t be taking over most jobs any time soon.

This is because, the analyst states: “Even in an environment where Automated Machine Learning is helping machines to build machines through deep learning, the really complex aspects of jobs will not be replaced. Thus, while AI will help automate various tasks that mostly we don’t want to do anyway, we’ll still need the human knowledge workers for thinking, judgment and creativity. But, routine tasks beware: AI is coming for you!”

Medical diagnosis

The third aspect is that we won’t get AI-powered medical diagnoses. Dr. Reddy says: “Due to a lack of training data and continued challenges around learning diagnosis and prognosis decision-making through identifying patterns, AI algorithms are not very good at medical decision automation and will only be used on a limited basis to support but not replace diagnosis and treatment recommendations by humans.”

He adds: “AI will be increasingly deployed against sporadic research needs in the medical arena, but, as with fraud detection, pattern recognition by machines only goes so far, and human insight, ingenuity and judgment come into play. People are still better than machines at learning patterns and developing intuition about new approaches.”

AI at work

The fourth and final area is that we will still struggle with determining where artificial intelligence should be deployed. Dr. Reddy states: “Despite what you might be hearing from AI solution vendors, businesses that want to adopt AI must first conduct a careful needs assessment. As part of this process, companies also must gain a realistic view of what benefits are being sought and how AI can be strategically deployed for maximum benefit.”

The analyst adds: “IT management, business users and developers should avoid being overly ambitious and carefully assess the infrastructure and data required to drive value from AI. Best practices and ‘buy versus build’ analysis also should be part of the conversations about implementing AI applications.”

What Is Missing From Big Data

This is an excellent TED Talk on what is missing from big data (hint: it is the human element).

Why do so many companies make bad decisions, even with access to unprecedented amounts of data? With stories from Nokia to Netflix to the oracles of ancient Greece, Tricia Wang demystifies big data and identifies its pitfalls, suggesting that we focus instead on “thick data” — precious, unquantifiable insights from actual people — to make the right business decisions and thrive in the unknown.

An interesting (but not too surprising) stat from the intro is that 73% of all big data projects deliver no value.

Turn Off All Your Push Notifications

Really? Someone had to write a 500-word ‘article’ about what should be common sense?

There’s a solution, though: Kill your notifications. Yes, really. Turn them all off. (You can leave on phone calls and text messages, if you must, but nothing else.) You’ll discover that you don’t miss the stream of cards filling your lockscreen, because they never existed for your benefit. They’re for brands and developers, methods by which thirsty growth hackers can grab your attention anytime they want. Allowing an app to send you push notifications is like allowing a store clerk to grab you by the ear and drag you into their store. You’re letting someone insert a commercial into your life anytime they want. Time to turn it off.