I think the FDA idea is interesting. In terms of your taxonomy, though, it strikes me as one level down from a 'regulation' umbrella category, where an FDA-style agency could be one option; more generally, regulation should be based on some underlying evidence/science. In any case, I agree that we reach too quickly for the other categories, which all have sub-optimal long-term outcomes relative to regulation done well. One issue with regulation, though, is that, at least in the U.S., it is hardly ever done well: it usually takes far too long to get something passed, and then another generation passes before it is revisited. That's why I think some kind of agency (like your proposal) makes more sense in this category, since agencies have continuous administrative oversight and can adapt to new science. That said, FDA-style clearance is just one approach. The FTC, for example, often lays out rules without requiring explicit approval, which could still be evidence-based.
Generally, I am part of the growing consensus that we've over-regulated the offline world of atoms to our detriment ... and under-regulated the online world of bits, also to our detriment.
But I agree with your broader point and am often concerned about regulatory capture. It’s wild how many states require expensive licenses for things like cutting hair, taxis, interior design, and even guiding tours or selling flowers! I worry that some proposed AI bills could favor incumbents and hurt startups. An FDA-style clearance process could create anti-competitive cost and compliance burdens if it’s applied too broadly or designed poorly.
I also think you’re right that “FDA-style clearance” is one option inside a wider “evidence-based regulation” bucket. I should have included a fifth category of FTC-style guidance, where the agency sets standards and expectations without pre-approval.
I appreciated Tutt’s “FDA for Algorithms” proposal on risk-tiering: guidance and labeling for medium-risk products, and clearance only for high-risk products. Guidance would establish pre-specified safety standards, and clearance would establish pre-specified outcomes to guide researchers and incentivize evidence generation.
This is an excellent topography of the myriad approaches to government-led tech reform! I’m curious whether the threat of litigation might proactively incentivize safety-first design of AI chatbots, absent direct legal action (e.g., OpenAI’s parental controls, kids’ safety AI evals, Google’s child-proof model offerings).
I think that's a great point about the threat of litigation as a design incentive. I imagine it depends on the company and product; a company like BetHog (mentioned in the piece) doesn't seem concerned about the threat of litigation. Pressure from a ballot initiative could be another possible path to proactively designing for well-being.