Yes, a 30% negative reaction rate would be significant in medicine.
You're talking about a response rate: i.e. you ask people to take part in a study (or poll), some say yes and some say no, and then you run the study (or poll) on the ones who said yes. That is not the same thing as the data you get from those respondents.
If only 30% of people responded after being treated, though, that would probably result in the drug being sent back to the lab to be worked on and improved.
Also it would be an R = 0.3 as significance.
The r value does not tell us statistical significance; it tells us the strength of a correlation. Significance is denoted by the p value.
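To make the distinction concrete, here's a quick sketch of my own (assuming Python with numpy/scipy, nothing from the posts above): pearsonr returns both numbers, and they can point in opposite directions. A decent-looking r from a tiny sample can come with a large p, while a trivially weak r from a huge sample can come with a tiny p.

```python
# Sketch: r (correlation strength) and p (significance) are different things.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Small sample: even a moderate-looking r can come with a large, non-significant p.
x_small = rng.normal(size=8)
y_small = 0.3 * x_small + rng.normal(size=8)
r, p = pearsonr(x_small, y_small)
print(f"n=8:     r={r:.2f}, p={p:.3f}")

# Huge sample: a very weak r can still be highly "significant" (tiny p).
x_big = rng.normal(size=10_000)
y_big = 0.05 * x_big + rng.normal(size=10_000)
r, p = pearsonr(x_big, y_big)
print(f"n=10000: r={r:.2f}, p={p:.4f}")
```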
Whether a treatment would be accepted depends on the difference between people who receive the treatment and those who don't (normally a placebo group). 30% getting a benefit is great if the alternative is nobody getting a benefit without treatment.
And yet again I've pointed out numerous polls had Trump almost certain to lose. It's not an out-there idea to suggest that once again the polls are wrong, even the polls saying Trump will win by a landslide (because some are saying that this time).
And yet you've pointed out yourself recently (or maybe it was someone else) that, I think, fivethirtyeight had Trump at a distinctly plausible ~30% win chance in 2016. How on earth do you get that and then claim the polls suggested Trump had no chance?
Let's take a fairly normal scientific paper that runs a set of experiments with 7 individual experiments (n=7) compared against 7 individual control experiments, tests the difference, and gets a p value of 0.04 (naively read as a 96% chance that the difference between the two is not down to chance). So it's 96% likely to be true, then? No, it isn't. For various technical reasons about the way statistical tests work, given sample sizes that small, the chance that the result is "true" is actually likely to be far lower, perhaps as low as ~50%. Seriously, it's true.
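To see why, here's a rough simulation of that scenario (my own sketch, not from any particular paper). The fraction of tested hypotheses that are really true (20%) and the size of real effects (0.8 SD) are assumptions purely for illustration; with numbers in that ballpark, the share of p < 0.05 results at n=7 per group that reflect a genuine effect comes out nowhere near 96%.

```python
# Rough simulation: how often does a "significant" n=7 result reflect a real effect?
# Assumed for illustration only: 20% of hypotheses are true, real effects are 0.8 SD.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_per_group = 7
n_experiments = 20_000
prior_true = 0.20      # assumed fraction of hypotheses that are actually true
true_effect = 0.8      # assumed effect size (in SD units) when the effect is real

true_positives = false_positives = 0
for _ in range(n_experiments):
    effect_is_real = rng.random() < prior_true
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect if effect_is_real else 0.0, 1.0, n_per_group)
    _, p = ttest_ind(treated, control)
    if p < 0.05:
        if effect_is_real:
            true_positives += 1
        else:
            false_positives += 1

ppv = true_positives / (true_positives + false_positives)
print(f"Fraction of 'significant' results that reflect a real effect: {ppv:.0%}")
```

The exact figure moves around with the assumed prior and effect size, but at these sample sizes it stays far below the naive 96% reading of p = 0.04.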
As a scientist, this is stuff I know. There is published data of mine out there - honestly collected and analysed - that even I, as the person who did the work and wrote it up, now suspect with hindsight and later publications is "wrong". We scientists read papers, they say stuff, and we don't necessarily believe them, because we know it can be wrong. Something is only firmly established when enough people have done enough experiments. One little n=7 isn't worth that much. Five papers each with n=7 is an n=35, and that's starting to look good.
And so it is with polls. Any individual poll is of low reliability, but if you start looking at multiple polls and taking averages, the errors shrink.
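As a toy illustration of why averaging helps (a sketch of mine, under the simplifying assumption that each poll's sampling error is independent, which real house effects violate to some degree):

```python
# Toy illustration: averaging several noisy polls lands closer to the "true"
# value than any single poll, roughly shrinking the error as 1/sqrt(k).
import numpy as np

rng = np.random.default_rng(1)
true_support = 0.52      # assumed true vote share
sample_size = 1_000      # assumed respondents per poll
n_trials = 5_000

for n_polls in (1, 5, 20):
    errors = []
    for _ in range(n_trials):
        polls = rng.binomial(sample_size, true_support, n_polls) / sample_size
        errors.append(abs(polls.mean() - true_support))
    print(f"{n_polls:2d} poll(s): typical error ~ {np.mean(errors) * 100:.2f} points")
```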
Part of this is the algorithms applied to try to account for bias, which are hard to get right at the best of times.
Sure, but they're also pretty good. If you know things like how various demographics vote, and how they vote relative to one another, pollsters can correct for unrepresentative elements in the data sample. If they find that across elections they repeatedly underestimate one side by a few percent, they can factor that in. Getting these adjustments right is tricky, but they're usually more accurate than no adjustment at all. Again, check fivethirtyeight. They not only put the polls up there, they do a lot of other analysis. One of the things they do is analyse the quality of different polling companies: they give them scores (A to D) to reflect how good their polls appear to be, and they measure the average error in their polls.
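As a minimal sketch of the sort of demographic correction being described (the general idea of post-stratification weighting, not any pollster's actual model; every number below is made up for illustration):

```python
# Sketch: if one age group is under-sampled, weight its responses back up so
# the sample matches the known population mix. All figures are invented.
population_share  = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share      = {"18-34": 0.15, "35-64": 0.55, "65+": 0.30}  # skewed sample
support_in_sample = {"18-34": 0.60, "35-64": 0.50, "65+": 0.40}  # support for candidate A

raw      = sum(sample_share[g] * support_in_sample[g] for g in sample_share)
weighted = sum(population_share[g] * support_in_sample[g] for g in population_share)

print(f"Unweighted estimate: {raw:.1%}")       # distorted by who answered the phone
print(f"Weighted estimate:   {weighted:.1%}")  # corrected to the population mix
```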
Again, if you're going to bring up science, science does a lot of this sort of thing too - attempts to make data clearer or to artificially correct for assumed errors. For instance, when I and most of my peers try to pick out very small electrical currents measured across a cell membrane, we leave it to an algorithm and then double-check with visual inspection to remove obvious errors. And we have to set the algorithm correctly. And there IS going to be error on every last measurement any of us makes. That's just how it is.
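For a flavour of what that kind of algorithm looks like, here's a toy threshold-based event detector run on a simulated noisy current trace (entirely made-up numbers, not any real analysis pipeline):

```python
# Toy sketch: detect small step events in a noisy current trace by threshold
# crossing. The threshold is exactly the kind of setting that has to be chosen
# carefully; borderline cases still need a human eye.
import numpy as np

rng = np.random.default_rng(7)
n_samples = 5_000
trace = rng.normal(0.0, 0.5, n_samples)        # baseline noise (pA), assumed
for start in (1_000, 2_500, 4_000):
    trace[start:start + 50] += -4.0            # small downward current events

threshold = -2.0                               # assumed detection threshold (pA)
below = trace < threshold
# An "event" is where the trace first crosses the threshold (falling edge).
crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1
print(f"Detected {len(crossings)} candidate events at samples {crossings}")
```

Push the noise up, or bring the threshold closer to the baseline, and it starts flagging noise as events, which is exactly why the visual double-check matters.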