Discussion about this post

yegg

I think the FDA idea is interesting. In terms of your taxonomy, though, it strikes me as potentially one level down under a 'regulation' umbrella category, where an FDA-style agency could be one option; more generally, regulation should be based on some underlying evidence/science. In any case, I agree that we reach too quickly for the other categories, which all have sub-optimal long-term outcomes relative to regulation done well. One issue with regulation, though, is that, at least in the U.S., it is hardly ever done well, usually taking way, way too long to get something passed, and then another generation before it is revisited. That's why I do think that in this category some kind of agency makes more sense (like your proposal), since it has continuous administrative oversight to adapt to new science. That said, the FDA style is just one approach. For example, the FTC often takes the approach of laying out rules but not requiring explicit approval, which could still be evidence-based.

Matthew Allaire

This is an excellent taxonomy of the myriad approaches to government-led tech reform! I’m curious whether the threat of litigation might proactively incentivize safety-first design of AI chatbots absent direct legal action (e.g., OpenAI parental controls, kids' safety AI evals, Google’s child-proof model offerings).
