There are on the order of a dozen basic approaches to software development, and within each approach there are different methodologies; for the popular approaches the methodologies can number in the hundreds. Each approach has specific advantages and disadvantages relative to particular problem spaces and goals.
The Object-Oriented approach is one of the more general approaches as far as applicable problem spaces are concerned. It can usually be employed with equal facility for IT, R-T/E, and scientific applications. However, within those spaces there are certain types of processing for which OO development is not appropriate.
One example is pure algorithmic processing, for which procedural or functional programming approaches are generally better suited. That's because procedural and functional programming map very closely to the computational models of Turing and von Neumann. Algorithmic processing is expressed at the mathematical level in terms of computation, so the intuitive fit is much better. In addition, mathematical algorithms are invariant, while the benefit of OO lies in managing requirements change.
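As a minimal illustration of that fit (the choice of Newton's method here is mine, not the author's), an invariant mathematical algorithm maps directly onto a plain function; the mathematics *is* the control flow, and a class hierarchy would add nothing:

```python
def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Find a root of f starting from x0 using Newton's method.

    Pure algorithmic processing: no objects, no mutable collaborations,
    just the iteration x := x - f(x)/f'(x) straight from the math.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    raise RuntimeError("did not converge")

# Root of x^2 - 2, i.e. an approximation of sqrt(2):
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The algorithm will not change when requirements do, so there is nothing for OO's change-management machinery to earn its keep on.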
So how can OO development be appropriate for scientific programming at all? The answer is that an application usually involves a lot more than executing a single algorithm. There is usually a user-friendly UI and some amount of persistence. There may be interoperability issues, such as integrating with CAD/CAE tools, statistical packages, etc. Today's complex scientific problems often involve multiple algorithms that need to play together (i.e., "glue" must be supplied). Often complex problem-specific set-up processing is required, such as providing a good basic feasible solution for a linear programming algorithm. So the OO approach can be useful in scientific applications for the substantial "boilerplate" that necessarily surrounds the mathematically defined algorithms. (With good application partitioning one can even switch development approaches across subsystems so the algorithmic portion can be encapsulated in a subsystem and developed, say, procedurally.)
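That partitioning can be sketched as follows (all names here are hypothetical, and the "algorithm" is a trivial stand-in): the mathematical core stays a plain procedural function, while an OO facade owns the surrounding boilerplate of set-up, validation, and result packaging:

```python
def lp_core(costs, bounds):
    # Stand-in for a procedurally coded LP algorithm; for illustration
    # it just picks the cheapest candidate inside the feasible bounds.
    feasible = [c for c in costs if bounds[0] <= c <= bounds[1]]
    return min(feasible)

class LpSolver:
    """OO facade: handles set-up, validation, and result presentation,
    and delegates the invariant mathematics to the procedural core."""

    def __init__(self, bounds):
        self.bounds = bounds

    def solve(self, costs):
        if not costs:
            raise ValueError("no costs supplied")
        best = lp_core(costs, self.bounds)
        # Package the result for the UI / persistence layers.
        return {"objective": best, "n_candidates": len(costs)}

solver = LpSolver((0, 10))
result = solver.solve([3, 7, 12])
```

The design point is the boundary: requirements churn lands on the facade (new validation rules, new output formats) while the encapsulated algorithmic subsystem stays untouched.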
Another example of where OO development is not very appropriate -- though the RAD marketeers would have us believe their products are OO -- is the sort of CRUD/USER pipeline applications between RDB and UI that are fairly common in IT. [CRUD: Create, Retrieve, Update, Delete; USER: Update, Sort, Extract, Report] All of the interesting stuff where OO could be useful has already been automated by the RAD IDEs so there is really very little left to abstract. However, IT is a huge field so there is plenty of opportunity to apply the OO approach outside of data entry applications.
Another arena where OO development has limited value is in language translators (cross-compilers) and similar applications where each application function is myopic in that it is independent of what any other function does. Thus translating Java statements to C# statements is done pretty much in a linear fashion on a statement-by-statement basis dictated by the grammar productions. The OO approach only shines when there are many relationships among many entities that each involve complex collaborations. OTOH, generalizing such an engine so that it can translate between, say, any two LALR languages given input BNF definitions is ideally suited to OO development. That's because one can abstract the invariants of grammars and translation at a higher level than individual languages and those invariants will have complex interactions.
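A toy sketch of that "myopic" processing (the translation rules here are invented for illustration, not a real Java-to-C# mapping): each statement is translated independently, one production at a time, with no cross-statement state and no web of collaborating entities for OO to model:

```python
# Hypothetical per-statement rewrite rules, Java -> C#.
RULES = [
    ("System.out.println", "Console.WriteLine"),
    ("boolean", "bool"),
    ("final ", "readonly "),
]

def translate_stmt(stmt):
    """Translate one statement in isolation -- myopic by design."""
    for java, csharp in RULES:
        stmt = stmt.replace(java, csharp)
    return stmt

def translate(program):
    # Purely linear: statement by statement, dictated by the rules;
    # no statement's translation depends on any other statement.
    return [translate_stmt(s) for s in program]
```

A generalized engine driven by input BNF definitions would instead abstract grammars and productions as first-class entities with complex interactions, which is where the OO approach starts to pay off.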
A more important factor in understanding OO applicability is goals. All software development has the goal of building a correct application; if it is incorrect, it just isn't finished yet. However, there are many other goals that vary with business and development context. Performance, speed of development, maintainability, reliability, and reuse are just a few of the possible goals. Each of these will be given different weight in a given development environment, and how appropriate OO development is in that environment will depend on those weightings.
The primary goal of OO development is to provide maintainable software over time. It is not only the primary goal, it is far ahead of all the rest. The OO approach was designed to address the recognized maintainability problems of the Hacker Era ('50s and '60s), where making changes took 10-50 times more effort than writing the original, and the SA/SD/SP Era ('70s and '80s), where changes took 5-10 times more effort. Thus the OO approach is ideally suited to any environment where requirements are volatile, either during development or over the application's life.
The second most important goal, at least originally in OO development's formative years in the '70s, was to provide a direct mapping to the problem space. The abstraction that is systematic and ubiquitous in OO development was expressly designed to provide that mapping. There were two reasons. One was to bridge the gap between natural language requirements in the customer's terms and the very disciplined computational model. The idea was that OOA models could provide a bridge for computer-illiterate customers to validate the rigorous specifications needed for the computational model. (Alas, that never really worked out very well; OOA notations carry too much semantic baggage for the customers to learn.)
The second reason was based on the notion that customers don't like change any more than software developers do. So customers will accommodate change in a fashion that causes the least disruption to their existing processes and infrastructures. If the software structure closely parallels the customer infrastructures, then the software should also be minimally disrupted by change because the customer has already figured out the least painful path. This would be especially true if one extracts invariants from the customer space to abstract as the software "skeleton". Thus the OO approach uses problem space abstraction as a crucial tool for providing long-term structural stability. (Fortunately this reason has lived up to the initial expectations over the years.)
A third goal was reuse. Logical encapsulation, implementation hiding, and decoupled interfaces all enable reuse. Originally the hubris of the '70s focused on class-level reuse, and that was quite successful for computing space entities (String, Array, Stack) that were mathematically defined. It was less effective for problem space objects, which tended to be highly complex and loosely defined with myriad views. However, large-scale reuse at the component and subsystem level is alive and quite well because one can tailor the class-level abstractions that implement the component or subsystem to that specific context. In addition, the semantics of the component or subsystem itself is limited to the nature of the subject matter.
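A minimal sketch of that subsystem-level reuse (the persistence subject matter and all names are hypothetical): the interface is limited to the subject matter, so the class-level abstractions behind it can be tailored per context without touching any client:

```python
from abc import ABC, abstractmethod

class Store(ABC):
    """Decoupled subsystem interface: clients depend only on this
    contract, never on a concrete implementation."""

    @abstractmethod
    def put(self, key, value): ...

    @abstractmethod
    def get(self, key): ...

class MemoryStore(Store):
    """One tailored implementation; a FileStore or DbStore could be
    swapped in behind the same interface without client changes."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)
```

The reusable unit is the contract plus whatever implementation suits the context, which is why this scale of reuse held up better than reusing individual problem space classes directly.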
A fourth goal is improved reliability. I'm not sure if this was a Founding Fathers' goal, but it seems to have worked out that way. When we first tried OO development we did pilot projects to evaluate it and we collected a ton of data. The most surprising single thing was that our defect rate was reduced roughly 50%. I honestly can't point to any demonstrable reason why, though there are a number of plausible reasons (e.g., encapsulation forces one to think in a highly focused manner and that focus might reduce defect insertions). However, hard data is hard data is hard data...
Other goals address issues like ease-of-construction and efficient mapping to the computational model. However, these were definitely tertiary. Thus OO applications usually have acceptable performance and they can be built in reasonable time, but typically they can't compete with other approaches that treat those goals as primary. One way this is manifested is that OOA/D/P is not as intuitive as other software development approaches, so there tends to be a longer learning curve to do it properly. [That doesn't mean some people will never be able to "get" it. Anyone who can spell C can learn OOA/D/P; it will just take them a little longer to get good at it than it takes for C.]
[A lot of NIH shops will jump on performance as a reason not to use OO development. I would point out that OO is now used extensively in R-T/E where performance is usually pretty important because the processors are quite dumb. In addition, most serious performance problems live in fundamentally poor design, not cycle counting. And cycle counting problems can often be optimized locally. I spent the better part of two decades doing OO R-T/E development and there were only a couple of places where we had to resort to Assembly at the method level. Finally, if one uses translation, one can target a non-OOPL for code generation if one needs instruction-level optimization.]
Unfortunately that learning curve carries a significant cost. Years ago I was at a social event and got into a conversation with a stranger who was also a software developer. The conversation basically went:
Him: So you do OO. We are just starting out using it. We're rewriting an 18 MLOC R-T/E system from scratch doing OO.
Me: Great. What methodology are you using?
Him (looking at me as if just discovering I had Alzheimer's): Uh, you know... Objects... UML...
Me: I meant, what sort of analysis and design approach are you using?
Him: I'm not sure; the instructor didn't mention a specific name.
Me: Hmmm. What sort of training are you getting?
Him: We had a week's course on C++ and a couple of our June Grads took OO courses in college.
Me: I meant, what sort of consulting and mentoring will you have?
Him: We didn't have the budget for that.
Me: And your shop size is...?
Him: About 150 people.
Me: Well, good luck. Excuse me, I need a refill...
That project is going to crash and burn with absolute certainty. The only question is how long it will take to realize it is doomed. Going into a major project using a sea change like OO development without adequate training is a guaranteed disaster. Sadly, it will probably end up in the annals of Great OO Failures even though OO development will have nothing directly to do with the failure.
[It never ceases to amaze me that a company will spend $50K evaluating copy machines and training AAs for half a day to use them yet the same company won't hesitate to let developers apply an entirely new development approach that they know nothing about to a project that might kill the company if it failed. But I digress...]
In summary, consider OO development if your requirements are volatile, your applications are long-lived, you want to improve reliability, and/or a significant portion of your developers' time is spent doing maintenance to existing applications.
If so (for most shops the answer here is: who doesn't?), then make sure the shop isn't in a niche where the OO approach isn't very useful. If that's OK, then don't bet the farm by committing it to a major project without proper training.