Basic logic says there is a flaw in one or both studies. That could mean the new study is flawed, or it could be that a flaw was identified in the old studies, which is why the change was made and the results changed. At a surface level, more information needs to be sought out before making a decision.
Assuming the change itself was the problem, without any further information, is just making poor assumptions based on incomplete information.
I'm not sure you understand the situation. There are three things for comparison mentioned in the first post: a survey of people across the country, an old system for law enforcement agencies to report crime statistics to the federal government, and an updated system for reporting. The survey regularly reaches the conclusion that official reporting is much lower than the actual crime rate, but the two moved generally in parallel: if one goes up, so does the other; if one goes down, so does the other. The actual accuracy of the reported numbers aside, you could at minimum tell upward or downward trends with reasonable accuracy. To my knowledge, this is still true with the new system; both the crime survey and NIBRS data are showing upward trends in crime since 2020.
The individual pieces of your logic are fine: there are flaws in one or both systems, the new system addresses many flaws in the old one, and there is a lack of information right now, but it's going to get better. (The old way was written records with very little specificity; the newer system is digital, allows for more precise crime categorization, and doesn't require someone at the FBI to enter a pile of written records into their systems.) The transition isn't a particularly political thing; the newer system has existed for decades and has been an official standard since the Obama Administration. The FBI had just been using both systems for a while: individual police precincts could submit their numbers through either system, and the Uniform Crime Report would contain data from both sets. Now they are only counting data submitted through NIBRS, the newer system, which is a good thing in many ways... but only if law enforcement is actually using NIBRS.
The old system, including SRS data, had near 100% participation from law enforcement agencies. The new one was declared the sole reporting standard with less than 60% participation. I think this was a good decision, since you'll never get everyone to upgrade anything until they actually can't use the old way. The one thing that's problematic here is trying to compare across the two systems. Going from the old combined system to NIBRS-only data, crime stats are lower overall. The switch was actually predicted to make the reported crime rate look higher (and it still might), but the opposite happened, because we're missing something like 40% of the data. If that missing data were a random sample, it wouldn't make much difference, but the agencies that haven't managed to participate in NIBRS are, understandably, the police in high-crime areas. Law enforcement with fewer crimes to report and more time on their hands mostly updated their reporting to the better system before it was required, so the current FBI stats are disproportionately missing the places with the highest crime rates.
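To make that selection-bias point concrete, here's a quick back-of-the-envelope sketch in Python with completely made-up numbers (none of these figures or agency names are real): if the agencies that haven't switched over happen to be the high-crime ones, the aggregate rate computed from the remaining data drops even though nothing about actual crime changed.

```python
# Toy illustration (made-up numbers): how non-random missing data skews an aggregate rate.
# Each agency is (name, population, crimes reported, has it transitioned to the new system?).
agencies = [
    ("Big City A",     8_000_000, 400_000, False),  # high-crime, still on the old system
    ("Big City B",     4_000_000, 220_000, False),
    ("Suburb C",       1_000_000,  20_000, True),
    ("Small Town D",     100_000,   1_000, True),
    ("Rural County E",    50_000,     300, True),
]

def crime_rate_per_100k(rows):
    pop = sum(p for _, p, _, _ in rows)
    crimes = sum(c for _, _, c, _ in rows)
    return 100_000 * crimes / pop

full = crime_rate_per_100k(agencies)                          # old combined report: everyone counted
partial = crime_rate_per_100k([a for a in agencies if a[3]])  # new-system-only report

print(f"Rate with all agencies counted:  {full:,.0f} per 100k")
print(f"Rate with only new-system data:  {partial:,.0f} per 100k")
# The second number is far lower, not because crime fell, but because the agencies
# missing from the new report are exactly the ones with the most crime to report.
```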
So, like, the new system is going to be better once everywhere is counted, and I would argue it's a good thing that they are starting to use it exclusively, but you can't do a meaningful year-over-year comparison between reports from before 2022 and reports now, because something like 40% of the data is missing. It's like, yeah, the crime rate went down, but only if you compare two different systems of reporting, and the new reporting is missing half of NYC and LA. The National Crime Victimization Survey, by contrast, doesn't have a sudden change in methods or a missing chunk of data biasing its results (not political bias, just statistical), so Phoenix is correct to see it as a more useful indicator of trends from 2020 to the present.
A couple of the links I read while writing this:
en.wikipedia.org