After a while away, Interesting stuffs is back. Here are the exciting things I learned and read during the week (12 Sept – 16 Sept):
1. We Need to Talk About How Good A.I. Is Getting
“What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.”
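To make the “diffusion” idea concrete, here is a minimal sketch in Python. Everything here is illustrative: `denoise_step` is a hypothetical stand-in for what is, in DALL-E 2, a large neural network conditioned on the text prompt.

```python
import numpy as np

def denoise_step(image: np.ndarray, step: int, total_steps: int) -> np.ndarray:
    """Hypothetical stand-in for the refinement step.

    A real diffusion model predicts and removes a little of the noise at
    each step; here we just blend toward a fixed "clean" target so the
    loop structure is visible.
    """
    target = np.full_like(image, 0.5)        # pretend "clean" image
    blend = 1.0 / (total_steps - step + 1)   # gentle early, stronger late
    return (1 - blend) * image + blend * target

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64, 3))     # "a random series of pixels"

total_steps = 50
for step in range(total_steps):
    image = denoise_step(image, step, total_steps)

print(round(float(image.mean()), 3))  # drifts toward the target as it refines
```

The point is the shape of the process: start from pure noise and refine repeatedly, rather than compositing existing images.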
“Over the past 10 years — a period some A.I. researchers have begun referring to as a “golden decade” — there’s been a wave of progress in many areas of A.I. research, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive A.I. models.”
“This summer, DeepMind announced that AlphaFold had made predictions for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.”
“But now, large language models like OpenAI’s GPT-3 are being used to write screenplays, compose marketing emails and develop video games. (I even used GPT-3 to write a book review for this paper last year — and, had I not clued in my editors beforehand, I doubt they would have suspected anything.)”
“A.I. is writing code, too — more than a million people have signed up to use GitHub’s Copilot, a tool released last year that helps programmers work faster by automatically finishing their code snippets.”
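For a feel of what “finishing code snippets” means in practice, here is a made-up example: the programmer types the signature and docstring, and a tool like Copilot proposes the body. The completion below is my illustration, not an actual Copilot output.

```python
# The programmer writes this much...
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    # ...and the assistant suggests a body like:
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([3.0, 1.0, 2.0]))  # 2.0
```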
“In fact, many experts will tell you that A.I. is getting better at lots of things these days — even in areas, such as language and reasoning, where it once seemed that humans had the upper hand.”
“There are, to be fair, plenty of skeptics who say claims of A.I. progress are overblown. They’ll tell you that A.I. is still nowhere close to becoming sentient, or replacing humans in a wide variety of jobs. They’ll say that models like GPT-3 and LaMDA are just glorified parrots, blindly regurgitating their training data, and that we’re still decades away from creating true A.G.I. — artificial general intelligence — that is capable of “thinking” for itself.”
“Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots.”
“Third, the news media needs to do a better job of explaining A.I. progress to nonexperts. Too often, journalists — and I admit I’ve been a guilty party here — rely on outdated sci-fi shorthand to translate what’s happening in A.I. to a general audience. We sometimes compare large language models to Skynet and HAL 9000, and flatten promising machine learning breakthroughs to panicky “The robots are coming!” headlines that we think will resonate with readers. Occasionally, we betray our ignorance by illustrating articles about software-based A.I. models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.”
2. A Robot Wrote This Book Review
“…It makes me wish that someone out there would crank out a comprehensive survey text on AI, one that’s laser-focused on the technical issues, written by industry mavens who are actually doing this stuff day in and day out, and is written in an engaging, clear, plain-spoken style.”
Wow!
3. Why Ethereum is switching to proof of stake and how it will work
“Ethereum’s proponents claim that a key advantage proof of stake offers over proof of work is an economic incentive to play by the rules. If a node validates bad transactions or blocks, the validators face “slashing,” which means all their ether are “burned.” (When coins are burned, they are sent to an unusable wallet address where nobody has access to the key, rendering them effectively useless forever.)”
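A toy model of slashing-by-burning, with a made-up ledger and validator name. The `0x…dEaD` address is a conventional example of a burn address nobody holds a key for; none of this is Ethereum’s actual implementation.

```python
# Minimal sketch of "slashing": a misbehaving validator's stake is moved
# to an address with no known private key, destroying it in practice.

BURN_ADDRESS = "0x000000000000000000000000000000000000dEaD"

balances = {"validator_1": 32.0, BURN_ADDRESS: 0.0}

def slash(validator: str, amount: float) -> None:
    """Burn (up to) `amount` of a validator's staked ether."""
    seized = min(amount, balances[validator])
    balances[validator] -= seized
    balances[BURN_ADDRESS] += seized  # unrecoverable: nobody has the key

slash("validator_1", 32.0)
print(balances)  # validator_1 loses its stake forever
```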
“Proponents also claim that proof of stake is more secure than proof of work. To attack a proof-of-work chain, you must have more than half the computing power in the network. In contrast, with proof of stake, you must control more than half the coins in the system. As with proof of work, this is difficult but not impossible to achieve.”
“And though staking is not as directly damaging to the planet as warehouses full of computer systems, critics point out that proof of stake is no more effective than proof of work at maintaining decentralization. Those who stake the most money make the most money.”
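A quick simulation of that rich-get-richer dynamic, under invented numbers rather than Ethereum’s real parameters: validators are picked in proportion to their stake, and each pick pays a fixed reward that is restaked.

```python
import random

random.seed(0)
stakes = [100.0, 10.0, 10.0, 10.0]    # one large staker, three small ones
rewards = [0.0, 0.0, 0.0, 0.0]
REWARD = 1.0

for _ in range(10_000):
    # Stake-weighted selection: more stake, more chances to propose a block.
    winner = random.choices(range(len(stakes)), weights=stakes)[0]
    rewards[winner] += REWARD
    stakes[winner] += REWARD          # compounding: rewards raise future odds

print(rewards)  # rewards track stake: the largest staker earns the most
```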
“Proof of stake also hasn’t been proven on the scale that proof-of-work platforms have. Bitcoin has been around for over a decade. Several other chains use proof of stake—Algorand, Cardano, Tezos—but these are tiny projects compared with Ethereum. So new vulnerabilities could surface once the new system is in wide release.”
Ethereum needs to move to proof of stake so it doesn’t further exacerbate the environmental horrors of Bitcoin. The question is, will its new system fulfill all the promises made for proof of stake? And how decentralized will it really be? If a public blockchain isn’t decentralized, what is the point of proof of anything? You end up doing all that work—consuming vast amounts of energy or staking all those coins—for nothing other than maintaining an illusion.
4. Could this be a glimpse into life in the 2030s?
5. Simple models predict behavior at least as well as behavioral scientists
“We compared the behavioral scientists’ predictions to random chance, linear models, and simple heuristics like ‘behavioral interventions have no effect’ and ‘all published psychology research is false.’”
“Behavioral scientists’ predictions are not only noisy but also biased. They systematically overestimate how well behavioral science ‘works’: overestimating the effectiveness of behavioral interventions, the impact of psychological phenomena like time discounting, and the replicability of published psychology research.”
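Here is a sketch of the flavor of that comparison, with made-up effect sizes chosen only to show how a noisy, upward-biased predictor can lose to the “no effect” heuristic; it is not the paper’s data or method.

```python
import numpy as np

rng = np.random.default_rng(42)

true_effects = rng.normal(loc=0.05, scale=0.10, size=1_000)       # mostly small
expert_preds = true_effects + rng.normal(0.15, 0.20, size=1_000)  # noise + optimism
heuristic_preds = np.zeros_like(true_effects)                     # "no effect"

def mae(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean absolute prediction error."""
    return float(np.mean(np.abs(pred - truth)))

print("experts  :", round(mae(expert_preds, true_effects), 3))
print("heuristic:", round(mae(heuristic_preds, true_effects), 3))
# With bias and noise of this size, predicting zero beats the "experts".
```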