Oh, and one thing I forgot that restorer-john just reminded me of: with Intel CPUs and AMD/Nvidia GPUs, the silicon lottery only reveals itself when pushing the processors WAY beyond rated specifications. Even those chips show no discernible silicon-lottery effects when run out of the box with no overclocking applied. Since we as consumers have no access to "overclocking" tools for ADC/DAC hardware, we would never find any performance variance attributable to silicon quality. Does the variance still exist? Yes, but never to the point where rated specifications are violated (unless a company were buying silicon from the edges of a wafer or something silly like that, which wouldn't be a performance-based silicon lottery; it would just mean lots of chips failing rated spec, which would never pass QC and be shipped out for final sale).
So an audio silicon lottery doesn't exist in any relevant sense beyond the material-science realities of silicon wafer production itself.
I also realized your question was asking about DAC revisions at the same time, which is a whole other issue from the silicon-lottery phenomenon of on-die power/efficiency envelopes. Revisions, on the other hand, can very plausibly introduce variance that a DAC maker isn't even aware of between batches of parts ordered from suppliers.
For instance, if you take a look at the RME V2 DAC, the IEM port is potentially worse than on the V1. Another area of performance degradation that seems to be spreading is jitter (more spikes around the main test frequency). Issues like this can arise when OEMs don't inform their manufacturer customers that a SKU has been revised while keeping the same name as the prior part. This is a separate issue from the silicon lottery, which is mostly determined by which portion of the wafer your die comes from (the further you deviate from the center of the wafer, the lower the silicon quality, essentially, and it's why enterprise customers pay an arm and a leg for reliability and consistency; someone like Intel complies by reserving the center of the wafer for Xeon parts or things like that, while Nvidia reserves it for their highest-end GPUs costing thousands).
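If it helps to picture why clock jitter shows up as extra spikes around the test tone, here's a rough numpy sketch of the effect; the 5 ns timing error, 200 Hz modulation rate and 1 kHz test tone are made-up illustrative numbers, not measurements of any RME unit:

```python
import numpy as np

# Rough illustration of how sampling-clock jitter turns into sideband spikes
# around a test tone. All numbers below are hypothetical, for illustration only.
fs = 48_000                  # sample rate in Hz
f0 = 1_000                   # 1 kHz test tone, a common jitter-test signal
n = 1 << 16
t = np.arange(n) / fs

jitter_peak = 5e-9           # 5 ns peak timing error (made up)
fj = 200                     # jitter modulation frequency in Hz (made up)
timing_error = jitter_peak * np.sin(2 * np.pi * fj * t)

# A perfect clock samples sin(2*pi*f0*t); a jittery clock samples at t + error.
jittered = np.sin(2 * np.pi * f0 * (t + timing_error))

# Windowed FFT: the jittered tone grows sidebands at f0 - fj and f0 + fj,
# i.e. the "extra spikes around the main frequency" you see in DAC measurements.
spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(jittered * np.hanning(n))) + 1e-12)
freqs = np.fft.rfftfreq(n, d=1 / fs)
for f in (f0 - fj, f0, f0 + fj):
    bin_idx = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: {spectrum_db[bin_idx]:.1f} dB")
```

Run it and the two sideband bins sit roughly 95 dB below the main tone for those made-up numbers; a different (or drifting) clock in a silent hardware revision would move those spikes around, which is exactly the kind of change a measurement would catch but a spec sheet wouldn't.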