Workshop: "AI Agents & the Web" #507
Comments
We need to make sure this workshop and the "Smart Agents" workshop are well differentiated.
As further references for topic 3 (possibly also 4), I would like to add that the Eclipse LMOS project, which is about building and running multi-agent systems, already uses two W3C RECs from the WoT WG, described at https://eclipse.dev/lmos/docs/lmos_protocol/agent_description/ and https://eclipse.dev/lmos/docs/lmos_protocol/discovery/. Additionally, the Agent Network Protocol uses DIDs.
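For readers not familiar with the WoT specifications referenced above, here is a rough, hand-written sketch of what an agent description based on the W3C WoT Thing Description might look like. All identifiers, URLs, and the `ex:capabilities` extension are invented for illustration and are not taken from the LMOS documentation.

```python
import json

# Rough sketch of a W3C WoT Thing Description used as an agent description,
# in the spirit of the LMOS pages linked above. All identifiers and URLs
# below are invented for illustration only.
agent_description = {
    "@context": [
        "https://www.w3.org/2022/wot/td/v1.1",
        {"ex": "https://example.com/agent-vocab#"},  # hypothetical extension vocabulary
    ],
    "id": "urn:uuid:3f1b2c6e-0000-0000-0000-000000000000",
    "title": "Weather Agent",
    "ex:capabilities": ["weather-forecast", "severe-weather-alerts"],
    "securityDefinitions": {"bearer_sc": {"scheme": "bearer"}},
    "security": ["bearer_sc"],
    "actions": {
        "getForecast": {
            "description": "Return a forecast for a given location.",
            "input": {"type": "object", "properties": {"location": {"type": "string"}}},
            "output": {"type": "object"},
            "forms": [{"href": "https://agent.example.com/actions/getForecast"}],
        }
    },
}

print(json.dumps(agent_description, indent=2))
```

Discovery then roughly amounts to publishing and querying such descriptions, which is the role the linked discovery page covers.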
W3C definitely needs to collectively assess where LLM-style AI fits in with the web and the threats/opportunities it creates.
That's more or less what we're trying to achieve with a workshop, starting with this very issue. W3C Workshops don't have to be "academic-style" - I don't think the workshops I have organized over the years were particularly academic in style (with my limited knowledge of what academic workshops look like).
Whatever it's called -- task force, workshop -- I suggest it should be:
The WebAgents CG has recently launched an Interoperability Task Force whose work should help address some of the points. We are bootstrapping a living report that we will continuously update as the work develops. We have a rough consensus on the report's scope and structure, and we'll start adding content over the next 2-3 weeks. The report currently aims to clarify:
Other notes:
We are happy to receive your feedback on how to make this effort most useful.
Another question a workshop in this space needs to answer is how we can ensure that any AI Agents support the Web. If they only extract value, people will either stop writing websites, or they'll build Markov pits for the Agents to fall into. Note that money isn't the only support that worthwhile websites need.
Should we add a list of "potential hazards due to the use of AI on the Web" to help encourage submissions of problem statements?
A note on terminology: generative AI and AI agents act as a new intermediating layer between publishers and end users; they don't disintermediate.
Indeed, I've amended the text accordingly.
I am the initiator of the ANP open-source community. ANP (Agent Network Protocol) is an open-source protocol designed for intelligent agents, enabling communication between them. Our goal is to build an open, secure, and efficient agentic web. ANP GitHub URL: https://github.com/agent-network-protocol/AgentNetworkProtocol
Yes, the Agent Network Protocol uses DIDs. We have studied most identity solutions, including OAuth, API keys, and blockchain, and found that DIDs are the most suitable identity and authentication mechanism for intelligent agents. We also use VCs and JSON-LD.
Dear Gaowei, could you share why you believe DID/VC is the most suitable identity solution for AI agents? This choice is very important. Thomas
I previously wrote a blog post comparing the pros and cons of DID, OpenID Connect, and API Keys in the context of agent interactions.
Thanks! I agree with your analysis. It's expected that there will be billions of AI agents on the Internet, and possibly even more. At that scale, simplicity and security are both very important and need to be balanced well. The DID/VC model looks like the best approach so far, but it needs much wider commercialization, and W3C needs to speed up the related standardization.
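To make the DID/VC discussion above a bit more concrete, here is a minimal, hand-written sketch of a DID document for an agent. It uses the did:web method purely as an example (ANP's own DID method and key material will differ), the key value is a placeholder, and the lookup helper is only illustrative.

```python
import json

# Minimal sketch of a DID document identifying an agent. The did:web method
# is used purely as an example; the public key below is a placeholder.
did = "did:web:agent.example.com"

did_document = {
    "@context": ["https://www.w3.org/ns/did/v1"],
    "id": did,
    "verificationMethod": [
        {
            "id": f"{did}#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": did,
            "publicKeyMultibase": "z6Mk...placeholder...",
        }
    ],
    # Keys another agent may use to authenticate this agent.
    "authentication": [f"{did}#key-1"],
}


def find_authentication_key(doc: dict, key_id: str) -> dict | None:
    """Return the verification method referenced from `authentication`, if any."""
    if key_id not in doc.get("authentication", []):
        return None
    for vm in doc.get("verificationMethod", []):
        if vm["id"] == key_id:
            return vm
    return None


print(json.dumps(find_authentication_key(did_document, f"{did}#key-1"), indent=2))
```

Verifiable Credentials could then sit on top of this, attesting to an agent's capabilities or to the delegation it has received from a user.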
Dear W3C Workshop Organizers, I'm a core contributor and Tech Lead for CAMEL-AI. We are very interested in contributing to Topic 1: AI Agents in the Web Ecosystem. CAMEL-AI (https://www.camel-ai.org/) is an open-source research community dedicated to building foundational infrastructure for AI agents. We focus on collaborative, tool-using agents designed to operate in complex environments, including interacting with web browsers, APIs, and other web-based tools. Our work aims to empower researchers and developers to rapidly build, experiment with, and understand multi-agent systems. Looking forward to further discussion!
Thanks for organizing this Ruoxi and Dom. I worry the current text is not scoped enough to lead to productive conversation in a workshop. It might be helpful to include a tangible outcome of the workshop to focus more on progress than academic-style conversation. One idea would be that the workshop could result in a whitepaper with sections covering the landscape, benefits, risks, challenges, and open questions related to LLMs and AI agents interacting with the web. The focus would be on the web, not a single company or technology. The whitepaper would be reviewed by various W3C groups, including the AB and TAG, for eventual public publication, to clearly provide a web perspective on this important topic.
@plehegar wrote:
I'd suggest focusing on the hazards that actual LLM AI agents are already creating. You don't need a lot of "problem statements"; you need to identify concrete problems that the W3C community can collectively address. I'd also suggest trying to discourage "solutions looking for a problem" submissions from those hoping to get market credibility from a W3C standardization effort. This field is moving MUCH FASTER than W3C has ever managed to operate; as I understand it (disclaimer: as a retired, unaffiliated, non-expert) Anthropic's MCP has become the de-facto standard LLM agent API less than six months after its introduction. Even more than during the current web platform's heyday, industry standards come from INDUSTRIES, with standards organizations coming along later to clarify / test / certify. The primary goal of a workshop MUST be to attract the people doing the actual innovating and deployment in whatever focus area you choose, and build a community of people who want to find common ground to address common concerns. @anolan4 wrote:
+1000. I agree with @dontcallmedom that he has organized some successful workshops, but they were FOCUSED on CONCRETE problems, and got REAL EXPERTS together to share thoughts on how to address them. The current draft has more of the flavor of much earlier W3C "academic-style" workshops that shared diverse perspectives but didn't end with clear consensus on how to tackle actual problems.
NOTE: This is a tentative workshop draft — Dom and I are working on a CFP. This document and this issue may be updated, and we would like to hear your opinions in the meantime.
Introduction
This W3C Workshop aims to gather stakeholders to discuss AI Agents and identify potential areas for standardization. The workshop will explore AI Agents in the Web Ecosystem, Inter-Agent Communication, Use Cases and Requirements for AI Agents, and discuss the needs and opportunities for Web standardization to support AI Agents and a possible future Agentic web.
Possible topics:
This topic investigates how AI agents can be integrated into browsers and web applications, and explores the long-term trajectory of AI agents evolving into autonomous user agents or operating systems.
How should AI agents relate to web browsers and web applications? What level of integration, if any, needs to be considered between them?
How might the presence of AI agents reshape user interaction paradigms and information flows on the Web? What risks might AI agents, as a new intermediation layer over content, create for publishers and the wider Web ecosystem?
How might AI agents evolve into fully-fledged user agents, functioning as next-generation browsers or operating systems?
What APIs (if any) would be needed to support interaction between web applications and AI agents? What inspiration can be drawn from the ongoing development of LLM/browser interaction APIs?
To enable a distributed and open ecosystem of AI agents, mechanisms for agents to discover, understand, and collaborate with each other may be needed, and the Web would be a natural candidate for that role.
How can agents reliably identify and locate other agents across services and domains?
How can collaboration protocols let agents exchange intent, delegate tasks, share capability information, and negotiate roles, so that agents can collaborate automatically?
How should authentication, authorization, access control, and encryption be handled in agent-to-agent communication?
To the extent that agents act on behalf of end users, what privacy and security mechanisms need to be built into agent communication?
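To make these questions concrete, here is a purely hypothetical sketch of an agent-to-agent task-delegation message. The field names and the shared-secret MAC are invented for illustration and do not come from any existing protocol; real deployments would presumably rely on asymmetric signatures tied to the agents' identities (e.g. DIDs) rather than a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical agent-to-agent task-delegation message. All field names and
# identifiers are invented for illustration; the shared-secret MAC below is
# only a stand-in for real authentication mechanisms.
message = {
    "from": "did:web:planner.example.com",
    "to": "did:web:booking.example.com",
    "intent": "book-travel",
    "delegation": {
        "on_behalf_of": "https://example.com/users/alice",  # hypothetical user identifier
        "scope": ["search-flights", "hold-reservation"],    # capabilities being delegated
        "expires": "2025-07-01T00:00:00Z",
    },
    "task": {"origin": "AMS", "destination": "LIS", "date": "2025-07-15"},
}

shared_secret = b"placeholder-shared-secret"
payload = json.dumps(message, sort_keys=True).encode()
message_with_mac = {
    **message,
    "mac": hmac.new(shared_secret, payload, hashlib.sha256).hexdigest(),
}

print(json.dumps(message_with_mac, indent=2))
```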
References on this topic:
With real-world applications of AI agents already emerging, understanding the value and limitations of current approaches to AI on the Web is critical to managing their impact on the future of the Web.
How are agents being used today in contexts such as personal productivity, accessibility, Web search, or automation?
What positive and negative impact are these new agents having on content and service production and delivery on the Web?
References on this topic:
As emerging consumers and likely producers of Web content and services, AI agents may raise new demands for interoperability built on standardized foundations.
Where existing Web standards already support the development and deployment of AI agents, are changes needed in how we produce and review these specifications to ensure a better fit with that ecosystem?
How can agent frameworks align with or extend ongoing specification efforts?
Where do current W3C standards fall short in enabling agentic behaviors, and where are new specifications needed?
What role (if any) should W3C play in the standardization, communication, and coordination of AI agents?
Possible outcomes:
Meeting format: hybrid (tentatively)
[cc'ing @dontcallmedom @plehegar]