Original title: "d/acc: one year later"
Written by: Vitalik Buterin, Founder of Ethereum
Compiled by Leek, Foresight News
Abstract: This article revisits the concept of decentralized acceleration (d/acc), exploring its application to technological development and the challenges it faces, including AI safety and regulation, its relationship with cryptocurrency, and public goods funding. It emphasizes the importance of d/acc in building a safer and better world, along with the opportunities and challenges ahead. The author elaborates on what d/acc means, compares different strategies for addressing AI risk, and discusses the value of cryptocurrency and ongoing experiments with public goods funding mechanisms, before closing with an outlook on the future of technological development: despite the challenges, humanity still has the opportunity to build a better world with the tools and ideas already at hand.
Special thanks to Liraz Siri, Janine Leger, and the Balvi volunteers for their feedback and review.
About a year ago, I wrote an essay about technological optimism, in which I described my general enthusiasm for technology and the enormous benefits it could bring, but also expressed my caution on some specific issues, focusing primarily on superintelligent AI and the risks of destruction or irreversible loss of human power that could result if this technology was not built properly.
One of my core points in that article was the idea of decentralized, democratic, and differentiated defensive acceleration: accelerate technology, but focus on technologies that improve our ability to defend rather than to cause harm, and decentralize power rather than concentrating it in the hands of a few elites who judge right and wrong on behalf of everyone. The model of defense should be that of democratic Switzerland and the historically quasi-anarchic Zomia, not that of lords and castles under medieval feudalism.
In the year since then, these ideas have grown and matured significantly. I shared them on 80,000 Hours, an organization focused on career choice, and received many responses, mostly positive, though some critical.
The work itself continues to advance and produce tangible results: we’ve seen progress in the field of verifiable open source vaccines; people’s understanding of the value of healthy indoor air continues to deepen; “Community Notes” continue to play a positive role; prediction markets have had a breakthrough year as an information tool; zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs) have been applied in the fields of government identification and social media (and Ethereum wallets are secured through account abstraction); open source imaging tools have been applied in the fields of medicine and brain-computer interfaces (BCI), and so on.
Last fall, we had our first major d/acc event: d/acc Discovery Day (d/aDDy) at Devcon, a full day of speakers from all of the d/acc pillars (biological, physical, cyber, information defense, and neurotech). Over the years, people working on these technologies have become more aware of each other's work, and outsiders have become more aware of the greater vision: that the same values that drive Ethereum and crypto can be extended to the wider world.
Fast forward to 2042. You see a news report in the media about a possible new outbreak in your city. You are used to seeing this kind of news: people tend to overreact to every animal disease mutation, which in the vast majority of cases never turns out to be an actual crisis. Two previous potential outbreaks were detected early and nipped in the bud through wastewater monitoring and open-source analysis of social media. This time, however, is different: prediction markets indicate a 60% chance of at least 10,000 cases, and this worries you.
Just yesterday, the genetic sequence of the virus was determined. A software update for the air tester in your pocket was released shortly afterwards, enabling it to detect the new virus (either from a single breath or after 15 minutes of exposure to indoor air). Meanwhile, open-source instructions and code to produce a vaccine using equipment available to any modern medical facility in the world are expected to be released within weeks. Most people are not taking any action yet, relying primarily on widespread air filtration and ventilation to stay safe.
Because you have immune problems, you act more cautiously: the open source local personal assistant AI you use, in addition to taking on routine tasks such as navigation, restaurant and activity recommendations, will also take into account real-time air test data and CO2 data to recommend only the safest places. This data is provided by thousands of participants and devices, and with the help of ZK-SNARKs and differential privacy technology, the risk of data being leaked or misused for other purposes is minimized (if you are willing to contribute data to these datasets, other personal assistant AIs will verify that these encryption tools are indeed effective).
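The scenario above names the ingredients (ZK-SNARKs plus differential privacy) without specifying a construction. As a minimal sketch of just the differential-privacy half, the hypothetical Python snippet below aggregates crowd-sourced air-tester readings with a Laplace mechanism; the function names, clipping bound, and epsilon value are illustrative assumptions, not anything specified in the text.

```python
import numpy as np

def private_mean(readings, epsilon: float, max_reading: float) -> float:
    """Publish the mean of per-device virus readings with epsilon-differential privacy.

    Readings are clipped to [0, max_reading], so one device can shift the mean
    by at most max_reading / n; Laplace noise calibrated to that sensitivity
    then hides any individual contribution."""
    clipped = np.clip(np.asarray(readings, dtype=float), 0.0, max_reading)
    sensitivity = max_reading / len(clipped)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Example: 1,000 simulated air-tester readings, privacy budget epsilon = 0.5
readings = np.random.uniform(0.0, 3.0, size=1000)
print(round(private_mean(readings, epsilon=0.5, max_reading=10.0), 3))
```

In the scenario, a ZK-SNARK layer could additionally let the assistant verify that published aggregates were computed this way without ever seeing the raw readings.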
Two months later, the epidemic has dissipated: it seems that 60% of people followed the basic protocol, wearing masks when the air tester sounded an alarm indicating the virus was present, and isolating at home if they personally tested positive. This was enough to push the transmission rate, already greatly reduced by passive high-powered air filtration, below 1. A disease that simulations suggested could have been five times worse than COVID-19 twenty years earlier ended up causing no serious impact.
One extremely positive outcome of the d/acc event at Devcon was that the d/acc concept successfully brought people from different fields together and really sparked their interest in each other's work.
It's easy to host an event that's "diverse," but it's hard to actually connect people from different backgrounds and interests. I still remember being forced to sit through long operas in middle and high school that I personally found boring. I knew I was "supposed" to enjoy them because if I didn't I'd be seen as an uncultured computer science slacker, but I just didn't connect with the content on a deeper level. The vibe at d/acc day was completely different: it felt like people genuinely enjoyed learning about all kinds of work in different fields.
This kind of broad coalition-building is necessary if we are to build a future that is better than domination, deceleration, and destruction. The fact that d/acc seems to be doing so well is a reminder of the value of this idea.
The core idea of d/acc is simple and clear: decentralized, democratic, and differentiated defensive acceleration. Build technology that can tip the balance of attack and defense toward defense, and do so without relying on giving more power to a central authority. These two aspects are inherently closely linked: any decentralized, democratic, or liberal political structure tends to thrive when defense is easy to implement, and struggles when defense is difficult – in those cases, the more likely outcome is a chaotic period of everyone against everyone, and eventually a state of equilibrium where the strongest rule.
One way to understand the significance of trying to achieve decentralization, defensibility, and acceleration simultaneously is to contrast it with the ideas that arise from abandoning any one of these three aspects.
Decentralize and accelerate, but ignore "differentiated defense"
Essentially, this is akin to being an effective accelerationist (e/acc), but pursuing decentralization at the same time. There are many people who take this approach, some of whom call themselves d/acc, but who helpfully describe their focus as "offense." There are many others who express more modest enthusiasm for "decentralized AI" and similar topics, but who, in my opinion, pay significantly less attention to the "defense" side.
In my view, this approach may avoid the risk of a particular group of people exercising dictatorial control over the global human race, but it fails to address the underlying structural problem: in an environment that favors offense, there is always a constant risk of disaster, or that someone will position themselves as protector and permanently dominate. In the case of AI, it also fails to properly address the risk of humanity as a whole being disempowered relative to AI.
Defend and accelerate, but ignore "decentralization and democracy"
Accepting centralized control in order to achieve security goals will always hold a certain appeal for some, and readers are no doubt familiar with many examples of this, as well as the drawbacks they entail. Recently, some have worried that extreme centralized control may be the only way to deal with the extreme technologies of the future: for example, imagine a hypothetical scenario in which "everyone wears a 'freedom tag' – a follow-up to today's more limited wearable surveillance devices, similar to the ankle tags used as prison alternatives in several countries…encrypted video and audio are continuously uploaded and interpreted by machines in real time." However, centralized control is a matter of degree. A relatively mild form of it, often overlooked but still harmful, is resistance to public scrutiny in the biotech sector (e.g., food, vaccines), together with the closed-source norms that allow this resistance to go unchallenged.
The risk of this approach is obvious: the center itself often becomes the source of risk. We have seen this during the COVID-19 pandemic, where gain-of-function research funded by multiple major world governments may have been the root cause of the pandemic, centralized epistemology led the World Health Organization to refuse for years to acknowledge that the coronavirus was airborne, and mandatory social distancing and vaccine mandates triggered a political backlash that may last for decades. Similar situations are likely to occur again in any risk scenario related to AI or other risky technologies. In contrast, a decentralized approach will be more effective in addressing risks from the center itself.
Decentralize and defend, but exclude acceleration
Essentially, this is an attempt to slow down technological progress or to push economic degrowth.
The challenge to this strategy is twofold. First, technology and economic growth are, on balance, so beneficial to humanity that any delay in them carries incalculable costs. Second, in a non-totalitarian world, stagnation is destabilizing: those who “cheat” the most, who can find plausible ways to keep moving forward, will prevail. Decelerationist strategies can work to a certain extent in certain contexts: the fact that European food is healthier than American food is one example; so is the success of nuclear nonproliferation so far. However, these strategies cannot work forever.
Through d/acc we strive to achieve the following goals:
Another way to think about d/acc is to return to a framework that emerged from the European Pirate Party movement of the late 2000s: empowerment.
Our goal is to build a world that preserves human agency, achieving negative freedom—preventing others (whether private citizens, governments, or superintelligent robots) from actively interfering with our ability to shape our own destinies—and positive freedom—ensuring that we have the knowledge and resources to exercise that ability. This echoes a centuries-old classical liberal tradition, ranging from Stewart Brand's focus on "access to tools" to John Stuart Mill's emphasis on education alongside freedom as key elements of human progress—perhaps supplemented by Buckminster Fuller's vision of a global problem-solving process that is participatory and widely distributed. Given the technological landscape of the 21st century, we can think of d/acc as a way to achieve these same goals.
In my article last year, d/acc focused specifically on defensive technologies: physical defense, biological defense, cyber defense, and information defense. However, decentralized defense alone is not enough to build a great world: we also need a forward-looking, positive vision of what humanity can achieve with its newfound decentralization and security.
Last year's article did contain a positive vision in two respects:
1. In focusing on the challenges of superintelligence, I proposed a path (not original to me) for how we might achieve superintelligence without losing our power.
2. When talking about information defense, I also mentioned in passing that in addition to defensive social technologies designed to help communities maintain cohesion and engage in high-quality discussions in the face of attackers, there are also progressive social technologies that can help communities make high-quality judgments more easily: Pol.is is an example, as are prediction markets.
But at the time, both of these points felt disconnected from d/acc’s core argument: “Here are some ideas about building a more democratic, more defensible world at a fundamental level, and by the way, here are some unrelated ideas about how we might achieve superintelligence.”
However, I think in reality there are some crucial connections between the d/acc techniques labeled “defensive” and “progressive” above. Let’s expand on the d/acc chart from last year’s article by adding this axis to the chart (while relabeling it “survive vs. thrive”) and see what that looks like:
There is a consistent pattern across fields: the science, ideas, and tools that help us “survive” in a field are closely related to the science, ideas, and tools that help us “thrive.” Here are some specific examples:
Beyond this, there are important interdependencies between the different subject areas:
The most persuasive objection to my article last year came from the AI safety community. The argument went something like this: “Sure, if we had half a century to develop strong AI, we could focus on building all of these beneficial things. But in reality, it looks like we might have only three years to get to general AI, and another three to get to superintelligence. So if we don’t want to doom the world or otherwise get it into an irreversible mess, we can’t just accelerate the development of beneficial technologies; we must also slow down the development of harmful technologies, and that means strong regulations that might anger the powerful.” In my article last year, I really didn’t propose any specific strategies for “slowing down the development of harmful technologies,” other than a vague call to not build risky forms of superintelligence. So it’s worth addressing the question directly here: if we were in a worst-case scenario, with extremely high risks from AI and a timeline of perhaps only five years, what kind of regulation would I support?
Last year, the major AI regulatory proposal was California's SB-1047. SB-1047 would have required developers of the most powerful models (i.e., those that cost more than $100 million to train, or $10 million to fine-tune) to conduct a battery of safety tests before release. In addition, AI model developers would be held liable if they failed to exercise sufficient caution. Many critics argued that the bill was a "threat to open source"; I disagree, as the cost threshold means it only affects the most powerful models: even a Llama 3 model is likely below that threshold. In retrospect, however, I think the bill had a more serious problem: like most regulatory measures, it was overfit to the current state of affairs. The focus on training cost has already proven fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 model cost only $6 million to train, and in new models like o1, costs are shifting from training to the inference stage.
In reality, the actors most likely to be responsible for AI superintelligence destruction scenarios are the military. As we have witnessed over the past half century in biosecurity (and earlier), militaries are willing to take some terrible actions, and they are extremely fallible. Today, AI military applications are developing rapidly (e.g., in Ukraine, Gaza). And any security regulation adopted by a government will, by default, exempt its own military and the companies that work closely with the military.
Still, these arguments are not reasons to sit idly by. Rather, they can serve as a guide to try to craft rules that raise the fewest of these concerns.
Strategy 1: Liability
If someone's actions cause legally actionable harm, they can be sued. This doesn't solve the problem of risk from militaries and other "above the law" actors, but it is a very general approach that avoids overfitting, which is why libertarian-leaning economists often support it.
The main candidates for bearing liability that have been considered so far are users, deployers, and developers.
Putting liability on the user seems most incentive-aligned. While the connection between how a model is developed and how it ends up being used is often unclear, it is the user who decides exactly how the AI is used. Holding users accountable creates strong pressure to use AI in what I believe is the right way: focusing on building mechanical suits for the human mind rather than creating new self-sustaining intelligent life forms. The former responds to user intent, and therefore will not lead to catastrophic actions unless the user wants them to; the latter carries the greatest risk of getting out of control and triggering a classic "AI runaway" scenario. Another benefit of placing liability as close to the end user as possible is that it minimizes the risk that liability pushes people toward actions that are harmful in other ways (e.g., closed sourcing, know-your-customer (KYC) requirements and surveillance, or state/corporate collusion to secretly restrict users, such as banks refusing to serve certain customers, in ways that exclude large swaths of the world).
There is a classic objection to attributing liability solely to users: users are likely to be average individuals, without much money, and perhaps even anonymous, so that no one can realistically pay for catastrophic damage. This argument is probably overstated: even if some users are too small to be liable, the average customer of an AI developer is not, so AI developers will still be incentivized to build products that give users confidence that they will not face high liability risk. That said, it is still a valid point that needs to be addressed. You need to incentivize someone in the pipeline who has the resources to take appropriate care to do so, and deployers and developers are both easy targets who still have a lot of influence over the safety of the model.
Deployer liability seems reasonable. A common concern is that it doesn't work with an open source model, but that seems manageable, especially since the most powerful models are likely to be closed source (if they turn out to be open source, then while deployer liability might not end up being very useful, it won't do much harm either). The same concerns apply to developer liability (although with an open source model there's the hurdle of needing to tweak the model to do something it wouldn't otherwise be allowed to do), but the same counterarguments apply. As a general principle, imposing a "tax" on control that essentially says "you can build something you can't control, or you can build something you can control, but if you build something you can control, 20% of that control must be used for our purposes" seems like a reasonable position for the legal system to take.
One idea that seems to be underexplored is to place liability on other actors in the pipeline, who are more likely to have ample resources. An idea that fits well with the d/acc philosophy is to hold accountable the owners or operators of any device that an AI takes over (e.g., through hacking) in the process of performing some catastrophically harmful action. This would create a very broad incentive to work hard to make the infrastructure of the world (especially in computing and biology) as safe as possible.
Strategy 2: Global “soft pause” button on industrial-scale hardware
If I were convinced that we needed something "stronger" than liability rules, I would choose this strategy. The goal is to have the ability to reduce the world's available computing power by about 90%-99% during a critical period, for 1-2 years, to buy humanity more time to prepare. The value of 1-2 years should not be underestimated: a year of "wartime mode" can easily be worth a hundred years of work under conditions of complacency. Ways to achieve a "pause" are already being explored, including specific proposals such as requiring hardware registration and verifying location.
A more advanced approach would be to use clever cryptographic tricks: for example, industrial-scale (but not consumer-grade) AI hardware could be equipped with a trusted hardware chip that would only allow it to continue running if it received 3/3 signatures from major international institutions (including at least one non-military affiliate) every week. These signatures would be device-independent (we could even require zero-knowledge proofs to be published on the blockchain if needed), so it would be all-or-nothing: there would be no practical way to authorize one device to continue running without authorizing all the others.
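As a rough illustration of the 3-of-3 weekly authorization described above, here is a hypothetical Python sketch using ordinary Ed25519 signatures (via the `cryptography` library) in place of the trusted-hardware and zero-knowledge-proof machinery the text envisions; the key setup, message format, and institution roles are assumptions for illustration only.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical setup: three signer institutions (at least one non-military),
# each holding its own Ed25519 key. In the scheme described above, the public
# keys would be baked into the trusted hardware chip at manufacture time.
signer_keys = [Ed25519PrivateKey.generate() for _ in range(3)]
trusted_public_keys = [k.public_key() for k in signer_keys]

def weekly_message(year: int, week: int) -> bytes:
    """Device-independent message: the same bytes authorize every chip at once."""
    return f"ai-hardware-continue:{year}-W{week:02d}".encode()

def chip_may_run(year: int, week: int, signatures: list[bytes]) -> bool:
    """The chip keeps running only if all 3 of 3 institutions signed this week."""
    if len(signatures) != len(trusted_public_keys):
        return False
    msg = weekly_message(year, week)
    for pub, sig in zip(trusted_public_keys, signatures):
        try:
            pub.verify(sig, msg)
        except InvalidSignature:
            return False
    return True

# Example: all three institutions publish signatures for week 14 of 2025.
sigs = [k.sign(weekly_message(2025, 14)) for k in signer_keys]
print(chip_may_run(2025, 14, sigs))   # True
print(chip_may_run(2025, 15, sigs))   # False: no authorization issued for week 15
```

Because the signed message depends only on the date and not on any device identifier, there is no way to authorize one device without authorizing all of them, which is the all-or-nothing property described above.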
This seems to "fit the bill" in terms of maximizing benefits and minimizing risks:
Hardware regulation is already being seriously considered, though usually within the framework of export controls, which essentially embody a "we trust our side, but not the other side" philosophy. Leopold Aschenbrenner famously argued that the United States should race to gain a decisive advantage, then force China to sign an agreement limiting the amount of equipment it can run. This approach seems risky to me: it could combine the pitfalls of multipolar racing with those of centralization. If we have to restrict people, it seems better to restrict everyone equally and actually try to cooperate on organizing the implementation, rather than having one side seek to dominate everyone else.
Both strategies (liability and hardware pause button) have holes, and it is clear that they are only temporary stopgap measures: if something can be done on a supercomputer at time T, it is likely to be done on a laptop at time T + 5 years. Therefore, we need more stable measures to buy time. Many d/acc techniques are relevant here. We can think of the role of d/acc techniques as follows: If AI takes over the world, how will it do it?
As briefly mentioned above, liability rules are a natural regulatory fit with the d/acc philosophy, as they can be very effective in incentivizing adoption of these defenses around the world and taking them seriously. Taiwan has recently been experimenting with liability for false advertising, which can be seen as an example of using liability to encourage information defenses. We shouldn’t be too keen on imposing liability everywhere, and remember the benefits of ordinary freedoms in enabling the little guy to engage in innovation without fear of litigation, but where we do want to push for safety more forcefully, liability can be quite flexible and effective.
Many aspects of d/acc go far beyond typical blockchain topics: biosecurity, brain-computer interfaces, and collaborative discourse tools seem far removed from what crypto people usually talk about. However, I think there are some important connections between crypto and d/acc, in particular:
One problem I have long been interested in is finding better mechanisms for funding public goods: projects that are valuable to very large groups of people but that lack a naturally accessible business model. My past work in this area includes my contributions to quadratic funding and its use in Gitcoin Grants, retroactive public goods funding (retro PGF), and most recently Deep Funding.
Many people are skeptical of the concept of public goods. This skepticism usually comes from two directions:
These are important criticisms, and valid ones. However, I believe that strong decentralized public goods funding is essential to the d/acc vision, because a key goal of d/acc (minimizing central points of control) is itself a hindrance to many traditional business models. It is possible to build successful businesses on open source—several Balvi grantees are doing so—but in some cases it is difficult enough that important projects require additional ongoing support. So we have to do the hard thing, which is to figure out how to do public goods funding in a way that addresses both of the above criticisms.
The solution to the first problem is fundamentally credible neutrality and decentralization. Central planning is problematic because it hands control to elites who can become abusive, and because it often overfits to current circumstances and becomes increasingly ineffective over time. Quadratic funding and similar mechanisms are precisely about funding public goods in a way that is as credibly neutral and (architecturally and politically) decentralized as possible.
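For readers unfamiliar with the mechanism, here is a minimal sketch of the standard quadratic funding formula: a project's ideal funding level is the square of the sum of the square roots of its individual contributions, and the matching subsidy is the difference between that and what was contributed directly (in practice, matches are scaled down to fit a fixed matching pool). The code and numbers below are illustrative, not taken from any specific round.

```python
import math

def quadratic_match(contributions: list[float]) -> float:
    """Quadratic funding: a project's ideal funding is the square of the sum of
    the square roots of its individual contributions; the matching subsidy is
    that amount minus what was contributed directly."""
    ideal = sum(math.sqrt(c) for c in contributions) ** 2
    return ideal - sum(contributions)

# Two projects raising the same $10,000 from very different crowds:
broad_support  = [1.0] * 10_000     # 10,000 donors giving $1 each
narrow_support = [10_000.0]         # one donor giving $10,000

print(quadratic_match(broad_support))   # 99,990,000.0 -> huge ideal match (scaled to the pool in practice)
print(quadratic_match(narrow_support))  # 0.0 -> no match beyond the donor's own contribution
```

The asymmetry is the point: many small donors signal broad public value, so the matching pool flows toward breadth of support rather than the size of any single contribution.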
The second problem is more challenging. A common criticism of quadratic funding is that it quickly turns into a popularity contest, requiring project funders to expend a lot of energy on public outreach. Furthermore, projects that are "in front of people's eyes" (e.g., end-user applications) get funded, while more behind-the-scenes projects (the typical "dependency maintained by some guy in Nebraska") get no funding at all. Optimism's retroactive funding relies on a smaller number of expert badge holders; here, the popularity-contest effect is reduced, but the social effect of having close personal connections to badge holders is amplified.
Deep Funding is my latest effort to solve this problem. Deep Funding has two main innovations:
But Deep Funding is just the latest example; there have been other ideas for public goods funding mechanisms before, and there will be more in the future. allo.expert has done a good job of cataloguing them. The fundamental goal is to create a social tool for funding public goods with a level of accuracy, fairness, and open access that at least approaches how markets fund private goods. It doesn't have to be perfect; after all, markets themselves are far from perfect. But it should work well enough that developers working on high-quality open source projects that benefit everyone can continue doing so without feeling the need to make unacceptable compromises.
Today, most of the leading projects in d/acc topic areas (vaccines, BCIs, "edge BCIs" like wrist EMG and eye tracking, anti-aging drugs, hardware, and so on) are proprietary. This has big disadvantages for ensuring public trust, as we have already seen in several of these areas. It also shifts attention toward competitive dynamics ("our team must win this critical industry!") and away from the larger race to ensure these technologies arrive quickly enough to protect us in a world of superintelligent AI. For these reasons, robust public goods funding can be a powerful promoter of openness and freedom. This is another way the cryptocurrency community can help d/acc: by making a serious effort to explore these funding mechanisms and making them work well in its own context, it prepares the ground for broader adoption of open source science and technology.
The coming decades bring important challenges. I’ve been thinking about two of them lately:
However, each of these challenges has a silver lining. First, we now have very powerful tools to do the rest of our work much faster:
Second, now that many of the principles we hold dear are no longer held by a select few of the old powers, they can be reclaimed by a broad coalition that welcomes anyone in the world to join. This is probably the biggest benefit of the recent political “realignment” around the world, and it’s worth taking advantage of. Cryptocurrencies have done a great job of capitalizing on this and finding global appeal; d/acc can do the same.
Access to tools means we can adapt and improve both our biology and our environment, and the "defense" part of d/acc means we can do this without infringing on others' freedom to do the same. The principle of liberal pluralism means we can have great diversity in how we do this, and our commitment to common human goals means it should be achieved.
We humans remain the brightest star. The task before us—to build a brighter 21st century, one that protects human survival, freedom, and agency as we reach for the stars—is a challenging one. But I believe we can do it.