It would be nice if more businesses embraced email instead of requiring phone calls for basic tasks. Imagine how much more productive we could be if we could just send off a quick email with the information and questions.
Instead, what we're likely going to get are "voice agents" calling each other when we could have just used email instead...
Businesses likely don't know a better way because the person selling them software doesn't want them to use an open and federated technology. They want the business to use Slack, with a Salesforce CRM, and a Jira workflow to top it off.
Most of the time it's simply a matter of not being aware of what's out there, or of nobody showing them a different workflow.
Anyone can generate an alternative chain of SHA-256 hashes. Perhaps you should consider timestamping, e.g. https://opentimestamps.org/. As for what the regulation says, I haven't looked, but perhaps it doesn't require the system to be actually tamper-proof.
The library is deliberately scoped as tamper-evident, not tamper-proof; it detects modification but does not prevent wholesale chain reconstruction by someone with storage access. The design assumes defence-in-depth: S3 Object Lock (Compliance mode) at the infrastructure layer, hash chain verification at the application layer.
External timestamping (OpenTimestamps, RFC 3161) would definitely add independent temporal anchoring and is worth considering as an optional feature. From what I can see, Article 12 does not currently prescribe specific cryptographic mechanisms (though of course the assurance level would increase with them).
On the regulatory question: Article 12 requires "automatic recording" that enables monitoring and reconstruction, and current regulatory guidance does not require tamper-proof storage, only trustworthy, auditable records. The hash chain plus immutable storage is designed to meet that bar, but the point you raise is a good one.
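For readers unfamiliar with the tamper-evident property being discussed: each record's hash covers the previous record's hash, so modifying any earlier record invalidates every hash after it. A minimal sketch (hypothetical names, not the library's actual API):

```python
import hashlib

GENESIS = "0" * 64  # placeholder hash for the first link


def append_record(chain: list, payload: bytes) -> dict:
    """Append a record whose hash covers the previous record's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    digest = hashlib.sha256(prev.encode() + payload).hexdigest()
    record = {"prev": prev, "payload": payload, "hash": digest}
    chain.append(record)
    return record


def verify(chain: list) -> bool:
    """Walk the chain: any edited payload or broken link fails."""
    prev = GENESIS
    for record in chain:
        if record["prev"] != prev:
            return False
        expected = hashlib.sha256(prev.encode() + record["payload"]).hexdigest()
        if expected != record["hash"]:
            return False
        prev = record["hash"]
    return True
```

This detects in-place edits, but, as the parent notes, someone with full storage access can simply regenerate the whole chain, which is why the anchoring (Object Lock, external timestamps) has to come from outside the chain itself.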
Another option instead of using identity is to use proof of work or hashcash, such that anyone who thinks a comment is valuable can spend some hash rate to upvote it. It doesn't matter how the content was generated, only that someone thought it was important, and you can independently verify this by checking how much hash effort went into hashing for that comment. This doesn't require any identity either.
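Concretely, a hashcash-style upvote could be a nonce that makes the hash of (comment id, nonce) start with some number of zero bits; the voter pays CPU time to find it, and anyone can check the weight with a single hash. A rough sketch of that idea (illustrative, not any existing system's protocol):

```python
import hashlib


def mint_upvote(comment_id: str, bits: int) -> int:
    """Search for a nonce so SHA-256(comment_id:nonce) has `bits`
    leading zero bits. Cost grows as 2**bits; the nonce is the stamp."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{comment_id}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1


def vote_weight(comment_id: str, nonce: int) -> int:
    """Verification is one hash: count the leading zero bits,
    i.e. how much work the stamp demonstrably cost."""
    digest = hashlib.sha256(f"{comment_id}:{nonce}".encode()).digest()
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length()
```

The asymmetry is the point: minting a 20-bit stamp takes about a million hashes, verifying it takes one, and none of it reveals who the voter is.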
Having multiple different distribution channels can solve that problem. Advertisers cannot monopolize all distribution channels simultaneously because of the costs involved (it would be like someone trying to buy the whole economy).
Another idea is to simply promote the donation of AI credits instead of output tokens. It would be better to donate credits, not outputs, because people already working on the project would be better at prompting and steering AI outputs.
>people already working on the project would be better at prompting and steering AI outputs.
In an ideal world sure, but I've seen the entire gamut from amateurs making surprising work to experts whose prompt history looks like a comedy of errors and gotchas. There's some "skill" I can't quite put my finger on when it comes to the way you must speak to an LLM vs another dev. There's more monkey-paw involved in the LLM process, in the sense that you get what you want, but do you want what you'll get?
Mitchell Hashimoto (2025-12-30):
"Slop drives me crazy and it feels like 95+% of bug reports, but man, AI code analysis is getting really good. There are users out there reporting bugs that don't know ANYTHING about our stack, but are great AI drivers and producing some high quality issue reports.
This person (linked below) was experiencing Ghostty crashes and took it upon themselves to use AI to write a python script that can decode our crash files, match them up with our dsym files, and analyze the codebase for attempting to find the root cause, and extracted that into an Agent Skill.
They then came into Discord, warned us they don't know Zig at all, don't know macOS dev at all, don't know terminals at all, and that they used AI, but that they thought critically about the issues and believed they were real and asked if we'd accept them. I took a look at one, was impressed, and said send them all.
This fixed 4 real crashing cases that I was able to manually verify and write a fix for from someone who -- on paper -- had no fucking clue what they were talking about. And yet, they drove an AI with expert skill.
I want to call out that in addition to driving AI with expert skill, they navigated the terrain with expert skill as well. They didn't just toss slop up on our repo. They came to Discord as a human, reached out as a human, and talked to other humans about what they've done. They were careful and thoughtful about the process.
Apart from the external person turning out to have experience with Zig and macOS (though not with developing terminals and rendering), this is imo a good example of what AI can be used well for: writing one-off code/tools where it's enough that they just work (even if not perfectly), and which one doesn't really care about maintaining, because they're meant to be used only in a specific occasion/context. In this case, the external person was smart enough to use AI to identify the problems rather than to produce "fixes" to send as a PR.
Imo, the issue is that the majority of people who submit AI slop as PRs have different motivations than this person (building a PR portfolio, whatever that may mean), or are much less competent and less eager to do actual work themselves (which AI use can worsen).
I myself was bitten by a radioactive grad student in 2008 that was obsessed with this idea at the time, and have since learned that almost every major household name lab PI has thought about this in one form or another.
Maybe you should check sources that have been around longer than 2017 or even 2008. Optogenetics has been an established field since at least 2005 (you can also search PubMed for "optogenetics 2005"). The field had been established and well known among microbiologists for 12 years when you discovered it.
To me this sounds like a computer-generated voice, for obvious pro-privacy reasons in this kind of project. If it bothers you, then maybe work on better voice-synthesis tech! I assume it doesn't sound leading-generation because it was rendered locally, but I could be wrong.