In summary, the answer to my very specific question (“is there such a tool?”) is “No, not to our knowledge”.
Which is fine.
------- Note about the scope of the question, as I intended it ----------------
A tool implementing what I described in a sane, fast, and reliable way would be much more complex than the proposed scripts; the recursive/deeply nested aspect – which I tried to emphasise in the OP, because that’s why I asked the question at all – is not addressed at all.
Grepping the file list from all archives is trivial – I did that before asking this question (with fd instead of find, same thing) – but it is not enough: you need to identify archives therein, extract them, and repeat recursively.

Given large archives, memory management is not trivial at all if you want the search to run in parallel. Things get more difficult still if you need to rely on magic numbers rather than extensions to identify archives: then you need to extract everything (or just the first few bytes of each file, if the archive format supports that). And whenever you extract anything, you need to be mindful of where you put it (tmpfs, preferably), lest you exceed the capacity of the current drive, wear out an SSD, etc., because you may need to extract GiBs of data. Those are just the obvious issues if you want the tool to actually work in practice.
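By “magic numbers” I mean sniffing the first bytes of each file instead of trusting its name – roughly this (a bare-bones sketch; the helper name is mine and the signature table is deliberately incomplete):

```python
# Sketch: identify an archive by its magic number instead of its extension.
# The signature table is far from complete (no rar, cpio, zstd, ...).

MAGIC_AT_OFFSET_0 = {
    b"PK\x03\x04": "zip",
    b"\x1f\x8b": "gzip",
    b"BZh": "bzip2",
    b"\xfd7zXZ\x00": "xz",
    b"7z\xbc\xaf\x27\x1c": "7z",
}

def sniff_archive(path):
    """Return a format name if the first bytes look like a known archive, else None."""
    with open(path, "rb") as f:
        head = f.read(265)  # enough to cover the "ustar" check at offset 257
    for magic, fmt in MAGIC_AT_OFFSET_0.items():
        if head.startswith(magic):
            return fmt
    # POSIX tar has no signature at offset 0; "ustar" sits at offset 257.
    if head[257:262] == b"ustar":
        return "tar"
    return None
```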
It’s a specialised problem that calls for a specialised tool with some actual thought and engineering put into it.
The fact that few search / grepping tools even attempt to support that kind of thing (ugrep is the only one that does it, to my knowledge) is an indication that it’s probably not trivial to get right.
At some point, I’ll probably patch together a kludge in Python to see whether I get lucky (no parallel search, no magic numbers).
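Something along these lines, probably (a rough sketch of that kludge, not the real thing: sequential, extension-based detection only, zip/tar only, every name below is mine; it extracts each nested archive into a temporary directory, so point TMPDIR at a tmpfs mount to keep the data off the disk):

```python
#!/usr/bin/env python3
"""Kludge: find file names matching a glob inside (deeply nested) archives.

Sequential, extension-based detection only (no magic numbers), zip/tar only.
Nested archives are extracted to a temporary directory before recursing;
set TMPDIR to a tmpfs mount to avoid writing GiBs to the drive.
"""
import fnmatch
import sys
import tarfile
import tempfile
import zipfile
from pathlib import Path

ARCHIVE_EXTS = (".zip", ".jar", ".tar", ".tar.gz", ".tgz", ".tar.bz2", ".tar.xz")


def looks_like_archive(name: str) -> bool:
    return name.lower().endswith(ARCHIVE_EXTS)


def search_archive(path: Path, pattern: str, label: str = "") -> None:
    """Print members of the archive at `path` whose basename matches `pattern`,
    then recurse into any member that itself looks like an archive."""
    label = label or str(path)
    if zipfile.is_zipfile(path):
        arc = zipfile.ZipFile(path)
        members = arc.namelist()
    elif tarfile.is_tarfile(path):
        arc = tarfile.open(path)
        members = arc.getnames()
    else:
        return
    with arc:
        for member in members:
            if fnmatch.fnmatch(Path(member).name, pattern):
                print(f"{label} :: {member}")
        nested = [m for m in members if looks_like_archive(m)]
        if not nested:
            return
        with tempfile.TemporaryDirectory() as tmp:
            for member in nested:
                # Extract only the nested archive itself, then recurse into it.
                # (No protection against hostile member paths -- it's a kludge.)
                arc.extract(member, path=tmp)
                search_archive(Path(tmp) / member, pattern,
                               label=f"{label} :: {member}")


if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit(f"usage: {Path(sys.argv[0]).name} GLOB_PATTERN ROOT_DIR")
    glob_pattern, root = sys.argv[1], Path(sys.argv[2])
    for p in root.rglob("*"):
        if p.is_file() and looks_like_archive(p.name):
            search_archive(p, glob_pattern)
```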
In the meantime, for my immediate need, I’ll just email the original author of the files I’m looking for, and mark this as “solved”.