This is one of those rare exceptions.
He starts with a description of how debates settle nothing but the question of who is the better debater. The problem is that he then misses the point:
When I listen to a debate, at the end I’m only persuaded as to which of the two opponents is the superior arguer. I’m not convinced one position is superior to the other because the whole point is that the superior debater could just as easily persuade me of the opposite proposition.
The real takeaway from that observation is that any debate essentially demonstrates who is the better debater. In order to know which one is right, you have to understand what they are saying, know how to look at evidence beyond what either of them may have presented, and know how to decide which one is correct. The key point is that this is true whether the debaters passionately believe in what they are arguing or are merely advocates for a client. The purity of the debaters' motives doesn't affect this; it is a matter of the effect on the listener.
Missing that point, he then goes on to the scientific method, extolling the virtues of two debaters (Leonard Susskind and Stephen Hawking) because their motives were pure. But in reality, if you don't understand the subject matter of the debate, you are only judging by the debating skills of the two.
To compound the error, the article completely misses the different techniques the scientific method has developed to insulate research results from biases (such as falling for superior debating skills), and instead seeks to apply only the scientific method's device of falsifiable predictions. He explains:
Here it is: Scientists understand they can’t ever prove a theory. All they can do is try to disprove them (”falsify them,” according to the vocabulary introduced by Karl Popper, the epistemologist who first explored this subject in depth). Fail to falsify one enough times in enough ways and researchers start to gain confidence that they’re on the right track.
The way they try to falsify a theory is to explore its implications. Some of those implications result in predictions — that under a specific set of conditions, the researchers should be able to observe a specific phenomenon. The researchers then either create those conditions or find them, and either observe the predicted phenomenon or something different. If they observe something different they’ve falsified the theory.
He then sets up what are essentially strawman predictions and attempts to apply them to IT theories that he doesn't like. However, having dropped one of the main controls for bias in science (the reproducible experiment), the value of the scientific method and its applicability to IT management is nowhere supported by what came before it in the article.
Fans of the scientific method tend not to like the fact that it doesn't apply cleanly to all situations. Not all situations can be analyzed with repeatable experiments, and not all situations admit predictions that are both accurate and meaningful. IT governance is one such example.
In Bob Lewis' own words, here is the prediction he ascribes to the view that IT should be run as a business:
For example, if IT organizations that are run as a business deliver more value than any others, then companies within which they are run this way should be more successful and profitable than competitors that don’t.
Except this would be a bad test. The theory isn’t that running IT as a business is better than the average of all other ways. The theory is that it’s best practice — that it is superior to all known alternatives.
So a better test would be to compare running IT as a business to a specific alternative. Here’s one: Integrate IT into the enterprise, providing strong strategic business leadership and disciplined governance. Compare the success and profitability of companies that use the two different approaches and you’ll have made a start.
The strawman here is the idea that a given theory of IT governance predicts demonstrable "success and profitability." Sure, all things being equal, but they never are. Consider Company A, in 2006, heavily invested in sub-prime mortgages. Consider Company B, in 2006, heavily invested in government work that was heavily funded by the American Recovery and Reinvestment Act (otherwise known as the stimulus package). Is their theory of IT governance going to be the controlling condition of their success between 2006 and 2010?
I suppose you could argue that a good, strategic IT organization would have predicted the risk of sub-prime mortgage investments, but even if the company had diversified, in mid-2008 it would have looked like a failure.
Of course I am picking the most extreme, obvious example, but consider something as simple as whether you were Starbucks or McDonald's at the end of 2008: what happened afterward (Starbucks down, McDonald's up) had nothing to do with IT governance. Even if Starbucks had used the integrated IT approach and McDonald's the IT-as-a-business model, the "success and profitability" of each company would in fact tell us nothing about their IT practices.
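The confounding problem here can be sketched with a toy simulation (every number and company in it is hypothetical, chosen only for illustration): give IT governance a small effect on profit and the industry-wide shock a large one, and the observed profitability ranking is driven almost entirely by the shock, not the IT approach.

```python
import random

random.seed(42)

def simulate_profit(good_it_governance, sector_shock):
    """Toy model: profit = baseline + small IT effect + large sector shock + noise.
    All coefficients are made up for illustration."""
    it_effect = 1.0 if good_it_governance else 0.0  # deliberately small
    noise = random.gauss(0, 0.5)
    return 10.0 + it_effect + sector_shock + noise

# Company with "best practice" IT, but in a sector hit by a downturn (-8)
hit_company = simulate_profit(good_it_governance=True, sector_shock=-8.0)

# Company with weaker IT governance, but in a subsidized sector (+8)
lucky_company = simulate_profit(good_it_governance=False, sector_shock=8.0)

# The sector shock swamps the IT effect, so comparing bottom-line
# profitability tells us nothing about which IT approach is better.
print(hit_company, lucky_company)
```

In this sketch the company with the "superior" IT governance ends up looking far less profitable, which is exactly why a naive profitability comparison cannot validate either theory of IT governance.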