Anyone who's actually bothered to read my posts on scientific usefulness may have noticed something: scientific usefulness and truth are not the same thing by a long shot. They cover a lot of the same ground, but there are things that fall into one and not the other.
For example, "emergent" phenomena like tornadoes are scientifically useful categorisations without being literally true. There is fundamentally no such thing as a tornado; it's just a label we apply to a really diverse array of air patterns. Useful but not true.
On the other hand, it would be possible for something to be true but not scientifically useful. For example, maybe there's a universe somewhere where Darth Vader really exists. If so, that would be true - but, since there'd be no way of having any contact of any sort with that universe, that truth would be completely useless.
However, there's another class of potentially true statements that are more problematic. These are statements that, if true, would be scientifically very useful, but that aren't testable - there's no way to tell whether they're true or false in advance. For example, if God exists and atheists really get sent to a big fiery pit after death, that would be very useful, but as yet no-one's come up with any experiment that could test for this.
We're getting into Pascal's Wager territory here - if these statements are potentially so significant, are we justified in ignoring them simply because we can't test them?
I would say yes. My rationale is that, by declaring these statements untestable, we're making it impossible to distinguish between them and the infinite number of other untestable statements that would counsel different behaviour. This is a classic refutation of Pascal's Wager - the possibility of a God that gets angry when people worship Him effectively cancels out the possibility of a God that gets angry when people don't. Until we find some way to test these statements, it's impossible to make useful decisions based on them - there are just too many conflicting options.
There's no little black box that will test the statements we feed it for truth. One could say that truth is not a useful concept, except insofar as it relates to predictivity. As far as predictivity is concerned, we do have such a little black box - and it's called science. That's as good as it gets, folks.
[Edit: on reflection, I think this cartoon said it better.]