Haven't really been following this conversation here, so I apologize if I'm going off the current focus. But I found this pretty interesting - not because of the data itself, but because of how it illustrates just how frustrating and confusing it can be to present data visually in a way that communicates everything you want to convey. I got this graphic from CNN (they sourced it from JHU's overall excellent site). Simply put, this is an absolutely terrible data representation:
View attachment 124064
At first glance, the chart seems to directly contradict the statement on growth just above it. The black curve is the full US, and the claim says that cases are doubling every five days. But the line (plotted on a log scale so that differently sized states can be reasonably compared) very clearly implies that the doubling time for the US is actually closer to 2 days. You have to read the fine print to see why neither is incorrect: the doubling time in the statement above the chart is measured only over the last week, while the x-axis of the chart is "days since 50th case" - considerably longer than one week for most of the states here. So the chart and the statement would agree only for states that reached their 50th case within the last week (e.g., ND is probably close to that range).
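To make the discrepancy concrete, here's a quick sketch of how the estimated doubling time depends entirely on the window you fit it over. The case series below is made up for illustration (it is not the actual JHU data): growth that doubles every 2 days early on, slowing to a 5-day doubling in the final week - roughly the shape that produces this kind of disagreement.

```python
import math

def doubling_time(cases, window=None):
    """Estimate doubling time (in days) from daily cumulative case counts.

    Uses the average exponential growth rate over the chosen span:
    doubling time = days * ln(2) / ln(cases_end / cases_start).
    With window=None, the fit runs over the whole series ("days since
    50th case"); with window=7, only over the last week.
    """
    if window is not None:
        cases = cases[-(window + 1):]  # keep the last `window` days of growth
    days = len(cases) - 1
    growth = math.log(cases[-1] / cases[0])
    return days * math.log(2) / growth

# Hypothetical series starting at the 50th case: doubling every 2 days
# for two weeks, then slowing to doubling every 5 days for the last week.
early = [50 * 2 ** (d / 2) for d in range(15)]          # days 0-14
late = [early[-1] * 2 ** (d / 5) for d in range(1, 8)]  # days 15-21
cases = early + late

print(round(doubling_time(cases), 1))            # whole chart: 2.5 days
print(round(doubling_time(cases, window=7), 1))  # last week only: 5.0 days
```

Same data, two defensible answers - which is exactly the trap the graphic sets for a casual reader.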
This is the kind of thing that a lot of people just ignore, and in most situations (think those old USA Today infographics) it really isn't a big deal. But sometimes it can be a big deal, and that's kind of scary here, because we're talking about how people consume information during a crisis that sorta requires we all have a pretty good idea, and some level of agreement, on what is actually going on. The reality is that two different people could look at this graphic and come to two different conclusions about the rate of growth: the US is doubling every five days if you take the statement at face value, and every 2.5 or so days if you just read the graph. Both are right within their own context, because each is concerned with a different sample that supports its conclusion. The scary part comes into play when those numbers are used to proxy or motivate action - do you behave differently under each scenario? If the answer is even "possibly", then this kind of graphic can be dangerous.

And admittedly, it's tough - a large part of my career has been spent taking technical analyses and converting them into easily digestible summaries that non-technical people will use to make decisions involving significant dollars or large groups of people. And I figure if I say I've been successful at it half the time, I'm being overly generous to myself. But it's worth the time to at least try to get this kind of stuff right. I think they missed that here, and in several others I've seen recently.