55

I am just finishing my master's degree (in computing) and applying for jobs. I've noticed many companies specifically ask for an understanding of object orientation. Popular interview questions are about inheritance, polymorphism, accessors, etc.

Is OO really that crucial? I even had an interview for a programming job in C and half the interview was OO.

In the real world, developing real applications, is object orientation nearly always used? Are key features like polymorphism used A LOT?

I think my question comes from one of my weaknesses. Although I know about OO, I don't seem to be able to incorporate it a great deal into my programs.

Ixrec
  • 27,621
  • 15
  • 80
  • 87
ale
  • 1,046
  • 1
  • 10
  • 18
  • Brutally honest but probably correct. My master's is in AI but I should be more proficient with OO from my first degree :S. – ale Jul 04 '11 at 10:44
  • 3
    All is not lost, though. Recognizing there is a problem is the first step to correcting it :) – devoured elysium Jul 04 '11 at 10:45
  • 37
    It took me several years to understand WHY exactly OO is a useful concept. I could understand all the technical parts, but just wasn't able to find any of it useful. I guess a lot of that came from the dumb examples I'd been shown, with Dogs extending Mammals extending Animals... What opened my eyes was a look into OO design patterns, especially the Listener (a.k.a. Observer) and Strategy patterns – Mchl Jul 04 '11 at 10:46
  • @Mchl: Your comment makes a lot of sense. I too got the "Dogs extending Mammals extending Animals" examples. I've written so many small programs but I need to work on a larger one to convince myself that OO is useful. I'll take a look at those patterns sometime :).. – ale Jul 04 '11 at 10:56
  • 1
    Yes, it is. Honestly. – quant_dev Jul 04 '11 at 11:05
  • 6
    See the answer by Thorbjørn Ravn Andersen. To be a good programmer, you need to know modularity and API design. Not all programmers in the industry are good, but most use OOP. Unfortunately, the mix of OOP and non-modular code and poor APIs leads to poor quality software, and you'll see a lot of this kind of code at work. Unless you write a lot of code on your free time, you probably don't know much about "real-life" OOP. That's OK, you are not alone in this position. – Joh Jul 04 '11 at 11:13
  • you can't have finished a master's degree and still not understand how popular/important OOP is – dynamic Jul 04 '11 at 14:55
  • 1
    Oh yes he can. And believe me, if you still have the same understanding of OOP that you had when you got your master's degree, then you probably don't know anything about OOP. – deadalnix Jul 04 '11 at 17:32
  • @yes123: Not really a constructive comment - probably why 66.67% of your questions have been closed. I just haven't experienced a **large** project to be able to **fully** appreciate OO programming. @all: Thank you all for the useful comments/answers. – ale Jul 04 '11 at 18:37
  • 1
    Yes, it is. After you have the concept, Google makes it pretty easy for you to learn any language's syntax – Felipe Sabino Jul 04 '11 at 21:27
  • @alemaster: you can take this personally, but if you read my comment it wasn't meant like that. In my experience, our teachers taught us how important OOP is already in the first year of my simple bachelor's degree, not to mention how much they did in the 2nd and 3rd. And my academy is in Italy, where technologies like this are years behind. So I wonder how awful the academies you attended are. – dynamic Jul 04 '11 at 23:30
  • 1
    Do not worry about the hardest OO concepts too much. In the beginning, you need to use/understand OO code written by others. "This class is your toolbox; you use inheritance to give your basic module the necessary functionality". You create OO projects from scratch much later. So, understand the theory, then observe usage in real life, use/extend existing projects, and much, much later design your own. – SF. Jul 05 '11 at 14:13
  • 1
    While you may have interviewed for a C programming position, a well designed C application still has many of the same characteristics as a well designed OO application. Data hiding, good modularity and meaningful separation of concerns make all the difference between a mess and a maintainable application. If you know what OO is trying to accomplish then you are likely to apply some of those principles in your C application, which in general would be a good thing. That's probably why they asked you about it. – Dunk Jul 05 '11 at 16:25
  • It is important but not the solution to all problems. Keep that in mind because many programmers (and employers) think that OOP is the only and best way to solve all your problems. – sakisk Mar 17 '12 at 09:58
  • Skip over the OO bigots and become intimately familiar with functional programming (like Haskell). So when they ask you "explain polymorphism", go ahead and give a perfunctory answer, and then ask them "do you guys understand the difference between parametric polymorphism and bounded polymorphism? No? Oh, well let me explain it to you. Seeing as how you've never used a functional programming language, it's understandable how you might not get it at first. Remember when you learned about generics? It's kind of like that." – Calphool Oct 20 '14 at 19:53

17 Answers

84

OOP is a paradigm that allows your program to grow without becoming impossible to maintain or understand. This is a point that students almost never get, because they only do small projects lasting from two weeks to two months at most.

This short period is not enough to make the purpose of OOP clear, especially if the people on the project are beginners. But sticking to some modelling discipline is crucial for big projects, say more than 50,000 lines of code. OOP isn't the only solution to that, but it is the one most broadly used in the industry.

This is why people want you to know OOP.

I would add, from experience, that almost all junior programmers have serious flaws in modelling and OOP. Most of them know how to write classes, inherit from them, and basic stuff like that, but they do not think in "OOP" and end up misusing it. This is why any serious recruiter will always look at what your competencies are in the OOP domain.

As these things are not learned at school, there is an enormous variation in knowledge between different candidates. And let's be honest: I don't think someone with poor knowledge of OOP could work on any big project, simply because it would require more time for the lead devs to manage these people than to simply write the code themselves.

If you don't think "OOP" yet, I would suggest you read some books about it and apply at companies that do not have really big projects, to get used to OOP while still doing useful work for your employer (and as long as he or she is paying you a salary, this will be useful for you too).

EDIT: ah, and I would add that I have already written OOP code in C. Even if it's not the most common use of C, it is possible with strong knowledge. You just have to build the vtables manually.
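
To make this concrete, here is a minimal sketch of the manual-vtable technique. It is an illustration only: Shape, Circle and the other names are invented for the example, not taken from any real project.

#include <stdio.h>

typedef struct Shape Shape;

/* The "vtable": one function pointer per virtual method. */
typedef struct {
    double (*area)(const Shape *self);
} ShapeVTable;

/* The "base class": every instance carries a pointer to its vtable. */
struct Shape {
    const ShapeVTable *vt;
};

/* A "derived class": embedding the base as the first member makes the
 * pointer casts below valid. */
typedef struct {
    Shape base;
    double radius;
} Circle;

static double circle_area(const Shape *self) {
    const Circle *c = (const Circle *)self;  /* safe: base is first member */
    return 3.14159265358979 * c->radius * c->radius;
}

static const ShapeVTable circle_vtable = { circle_area };

int main(void) {
    Circle c = { { &circle_vtable }, 2.0 };
    Shape *s = (Shape *)&c;                  /* the "upcast" */
    printf("area = %f\n", s->vt->area(s));   /* dynamic dispatch, by hand */
    return 0;
}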

And behind the OOP techniques, something else is hidden: software design. Software design is really helpful, in C as in any other language. Many recruiters will test your software design competencies, and OOP questions are good for that, but OOP isn't the main thing being tested here. This is why you get those questions even for a C job.

deadalnix
  • 5,973
  • 2
  • 31
  • 27
  • 2
    And yes.. as I wrote in a previous comment, I think I need to work on a larger project to appreciate OOP :). – ale Jul 04 '11 at 10:59
  • +1 on the software design. Most of the actual software design take advantages of the OOP concepts. – Gabriel Mongeon Jul 04 '11 at 15:33
  • 5
    You don't need vtables to be OOP. simply using `struct` and functions that work on that structure in C is OOP. – edA-qa mort-ora-y Jul 04 '11 at 17:00
  • That's why I love the fact that we learned OOP right away, even before we learned much about programming. Used this book: http://www.bluej.org/objects-first/ Found it very helpful. The BlueJ program might seem silly if you're used to IDEs, but I actually preferred it over things like Eclipse and NetBeans in the beginning :p – Svish Jul 04 '11 at 17:19
  • 2
    @edA-qa mort-ora-y struct doesn't give you OOP functionality. Polymorphism? Inheritance? Do those mean something to you? OK, so how do you implement virtual functions without a vtable? – deadalnix Jul 04 '11 at 17:35
  • 2
    @deadalnix: you implement them the same way .NET and Java do multiple inheritance. You should know that the first C++ compilers weren't compilers at all; they were translators that took C++ code and turned it into C code that was passed to a C compiler. Google "CFront". – gbjbaanb Jul 05 '11 at 09:09
  • 3
    Java and .NET don't do multiple inheritance. And yes, C++ can be translated into C automatically, but this is completely unrelated to the question of using a vtable or not. In fact you have to: you cannot implement virtual functions without a vtable. – deadalnix Jul 05 '11 at 09:29
  • @deadalnix, You can use `struct` + `template` and easily forget polymorphism. – ar2015 Mar 02 '18 at 07:48
  • @community wiki, You mentioned `big project`. How about `Linux` and many GNU applications? Aren't they big enough? – ar2015 Mar 02 '18 at 07:59
  • I'm a big fan of struct + templates. This technique is often referred to as compile-time polymorphism, and for good reason: you end up reusing the same techniques as for OOP and/or functional programming, but with a different set of trade-offs (faster code at the cost of less runtime flexibility). In terms of high-level software design, they are just the same. – deadalnix Mar 03 '18 at 11:51
38

The overwhelming problem in computer programming is handling complexity, and modern programs can be very complex indeed; this appears only to increase.

Much of the work done in software engineering of non-trivial computer programs concentrates on taming complexity and making it accessible to as many people as possible without their devoting a lifetime to learning first.

Examples:

  • Modularization: You make programs conceptually simpler by having modules of code, where each module only knows a little about other modules (instead of, e.g., a mouse icon drawing routine being allowed to manipulate network card buffers directly).
  • APIs: They give a simple usage path to the complex programs behind them. When you open a file you do not care that network shares are handled differently from a USB disk. The API is the same. (A tiny C sketch of these first two points follows this list.)
  • Object orientation: This allows you to reuse existing code and make it work transparently with new code you add, while still hiding all the underlying complexity.
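
As a rough illustration of the first two points in C (the logger module and all of its names are invented for this example): the header exposes only an opaque handle and a few functions, so callers get a simple API and never touch the module's internals.

/* logger.h -- the public API: an opaque handle plus a few functions. */
typedef struct Logger Logger;            /* callers never see the layout  */
Logger *logger_open(const char *path);   /* same API whatever the backend */
void    logger_write(Logger *lg, const char *msg);
void    logger_close(Logger *lg);

/* logger.c -- private internals; only this file knows the struct layout. */
#include <stdio.h>
#include <stdlib.h>

struct Logger { FILE *out; };

Logger *logger_open(const char *path) {
    Logger *lg = malloc(sizeof *lg);
    if (lg) lg->out = fopen(path, "a");
    return lg;
}

void logger_write(Logger *lg, const char *msg) {
    if (lg && lg->out) fprintf(lg->out, "%s\n", msg);
}

void logger_close(Logger *lg) {
    if (lg) {
        if (lg->out) fclose(lg->out);
        free(lg);
    }
}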

In other words, knowing a lot of tricks is necessary if you want to work on non-trivial pieces of software, either alone or (most likely) with others.

  • 7
    I like that you made a difference between modularity, APIs and OO. I think there is a widespread misconception in the software industry that OO means all these things. – Joh Jul 04 '11 at 11:08
  • 3
    Even knowing OO itself is not a hard-set requirement, Procedural and functional paradigms are enough in certain areas (C or Haskell). You still need to learn Modularization and API design. – Raynos Jul 04 '11 at 12:44
  • @Raynos, in C you have function pointers which allow you to do explicitly what OO does intrinsically. Similarly Haskell uses pattern matching to explicitly do what the compiler does in e.g. Java. –  Jul 04 '11 at 13:25
  • @ThorbjornRavnAndersen it's a bit niche to write OO style C and Haskell. I'm just saying that reusing code can be done in the procedural and functional paradigm. – Raynos Jul 04 '11 at 13:38
  • @Raynos, pattern matching is not OO style Haskell. Function pointers are not OO style C. But the goals achieved are the same as OO achieves. –  Jul 04 '11 at 13:40
  • @Raynos OO Haskell? Unless you mean existential types, no. – alternative Jul 04 '11 at 22:29
  • 3
    @mathepic I'm not saying one should write OO haskell (or whether it even exists). I was saying you don't need to know OO. There are other ways to handle complexity (FP) – Raynos Jul 04 '11 at 22:36
  • @Raynos But nobody really writes OO-style Haskell in the first place - it is not niche. To do it, you would have to go insane with typeclasses and rank-2 polymorphism, and it would just be in general quite ugly. Where do you get this as a niche? – alternative Jul 04 '11 at 22:38
  • @mathepic niche is the wrong word. I meant to say writing OO C or Haskell is very uncommon / rare / obscure. – Raynos Jul 04 '11 at 22:51
  • +1 for the term "taming complexity". This is the real duty of a programmer. – Karl Jul 05 '11 at 13:06
14

Yes, primarily because perhaps the two most popular development platforms used in commercial development (Java and .NET) are object oriented and that means yes, OO is used a lot (including polymorphism, inheritance and everything else).

Companies don't specifically care about object orientation as a technology - this isn't an ideology thing, they care about people who can develop solutions to their problems in ways which align with their IT strategy.

But I wouldn't worry too much about feeling it's a weakness. Without disrespecting your education, most people in the commercial world don't see programmers leaving university (at any level) as the finished article. You've still got a lot left to learn and that's understood (probably better by the companies than the students).

Jon Hopkins
  • 22,734
  • 11
  • 90
  • 137
  • 2
    Have to call you out on companies 'not caring' about OO - good companies do care about reusable/maintainable codebases, and OO patterns are the recognised way to do this. – HorusKol Jul 04 '11 at 23:57
  • 1
    @HorusKol - They do, yet despite that, Perl, Cobol and Visual Basic were all commercially successful and Smalltalk wasn't. You're right that companies like maintainable code, but it's not an absolute requirement; they'll weigh it up against other factors. – Jon Hopkins Jul 05 '11 at 08:21
  • Well, Cobol was around before OO. I can't comment on Smalltalk, but I imagine that there must have been problems with it if it wasn't picked up. – HorusKol Jul 06 '11 at 02:11
  • 1
    @HorusKol - Actually Cobol and OO came into existence around the same time (late 50s), but even if we assume that OO didn't really start to take hold until the 70s or even the 1980s (if you wait for C++), if it's THAT big a deal for companies, why did it not catch on until the late 90s (with Java)? The answer is that there are ways of having a maintainable code base other than OO - companies care about maintainable code, but there's more than one way to skin a cat and OO isn't the only solution. – Jon Hopkins Jul 06 '11 at 15:17
  • It's kinda weird to call the platforms themselves OO. Clojure isn't OO. Scala has some OO elements I guess, but is best used in a functional manner. F# is kind of the same deal. writing OO code in F# is just dirty. – sara Jun 21 '16 at 08:43
7

As with most things in real life, real-world programming differs from the theory.

Yes, if you keep the OO paradigm polished and always at the back of your mind, you can do better at writing code that is manageable, understandable and easily extensible.

Unfortunately, the real world has this:

  • project time pressures
  • procedurally oriented team members
  • cross-located teams, multiple vendors
  • legacy code not having any orientation whatsoever
  • as long as it works, few care about how the code is written
  • even if the code does not work, the motivation is to fix it, not to "OO" it
  • modules, platform limitations, frameworks which simply don't allow you to do good OO

In a real job, you have to work with the above issues. This sounds demoralizing, but treat it as a heads up. Hiring companies place too much importance on OO while hiring. It's easy to see why: the only way they can test a candidate is by asking about their understanding of OO. And unfortunately, many candidates just brush up on those questions before turning up for an interview.

Real-life OO comes slowly. It helps if you keep reading and keep improving over time.

Raynos
  • 8,562
  • 33
  • 47
Amol
  • 143
  • 7
6

I had much the same feeling upon finishing my Bachelor's degree, and a great book that showed me why and how OOP is relevant for real-world applications is Head First: Design Patterns. I sincerely recommend you take a peek; it's written in a really fun way and makes a lot of valid points about why an OOP approach is desirable when working with larger-scale, constantly changing systems.

David Haas
  • 41
  • 1
6

Even for some jobs in C, you might need to know object-oriented design (and probably be better at it than if your compiler did it for you), as evidenced by a recent series of articles on object-oriented design in the Linux kernel. (Part 1, Part 2)

GTK+ also uses a lot of object-oriented design patterns.

Ken Bloom
  • 2,384
  • 16
  • 20
4

Jon Hopkins wrote:

Yes, primarily because perhaps the two most popular development platforms used in commercial development (Java and .NET) are object oriented and that means yes, OO is used a lot (including polymorphism, inheritance and everything else).

Which is pretty much what I was going to say, but it's not just Java and .NET: C++ is everywhere, Objective-C is all over OS X, all the cool kids are doing Ruby or Python, and all of these things and many, many more have a focus on object orientation. A lot of newer languages are multi-paradigm, so something like F# is primarily a functional language but also supports object orientation. It is everywhere, and having at least some understanding is very useful. Don't fret too much about it though; having just completed university courses means that you're ready to start learning about developing code in the real world :)

Pierre.Vriens
  • 233
  • 1
  • 2
  • 11
eviltobz
  • 161
  • 2
4

I have to express some disagreement with this notion that OO is everything - one could say OO allows you to build cities, but procedural programs are the bricks.

To give my answer in the form of an analogy: a general needs objects, the soldier needs procedural. Once you drill down far enough in OO you find procedures, and if that is your expertise and you're good enough, don't worry about OO, because it's easy enough for somebody to write this OO chess game code:

-findBestMove
-makeBestMove
-waitForPlayerInput

but then somebody has to write the code behind -findBestMove and you can be sure it isn't just this:

foreach my $move (@moves) {
    $bestMove = ($move > $bestMove ? $move : $bestMove);
}
return $bestMove;

On the other hand, if you don't know how to read the OO code, worry. Because you can be (almost) sure that your code will be messing with objects of some sort. Unless you work on the legacy Fortran behemoth of 12,000 global vars and 1,200-line "modules" I currently maintain.

Alex
  • 21
  • 1
3

I've been programming for a long time, and I find the concepts of OO useful even when programming in C -- even though, if tested, I would probably fail to describe those concepts in every tiny detail. At one point I even created an OO language, albeit a rudimentary one, to get my head around the concepts and to find enjoyment in OO from a new angle.

BTW, C++ has made a huge and ugly mess of OO, whereas Objective C does it right.

About interviews, they have become a horror show -- from both sides of the table. Most interviewees are very freaked out by them. Most hiring managers are astonished by how many people fail even very basic programming tests.

That said, there are some enormous douche bags working in the software industry right now who know NOTHING and yet expect the world from prospective employees.

Ponk
  • 471
  • 4
  • 8
  • C++ is a multi-paradigm language but handles OO pretty well.. no? – Nikko Jul 04 '11 at 14:10
  • 1
    I would say that C++ gives you the ability to make a huge ugly mess. If done correctly, its beauty is its simplicity. – Martin York Jul 04 '11 at 17:42
  • Skipping the basics always creates a huge mess. C++ just has a large number of basics you need to understand. This is why it's possible to use C++ for large programs. It's necessary. – tp1 Mar 17 '12 at 11:22
  • @Ponk: *BTW, C++ has made a huge and ugly mess of OO, whereas Objective C does it right.* - I haven't tried Objective C, so I have no reason to doubt you, but where can I find a more thorough and convincing argument for this? – Jim G. Mar 18 '12 at 08:15
3

Learning OOP is not as useful as learning software development. Go read Code Complete 2.

Sure it's a useful tool but OOP itself is really small. In general when companies and recruiters say "OOP" they mean "Software development". It's being used as a generic term.

Real recruiters will be able to tell the difference between you knowing how to develop software and merely matching the "Has 3 years of 'OOP'" tickbox.

Raynos
  • 8,562
  • 33
  • 47
1

OOP is not important in itself, but because of what it brings with it: the capability to abstract and isolate, to group things together, and to expose only the parts that are required for those things to interact.

This is a common engineering technique called "modularization", which allows you to create complex systems as aggregations of simpler ones, without having to take care of every single detail at the high level, and which requires components to be replaceable, even if they are not exactly the same.

Those "engineering concepts" have been carried into software development ever since software products themselves became larger than a single developer's capability, thus requiring a way to make developers work on independent pieces and to let those pieces interact.

That said, those principles are not necessarily found only in OOP (if computation theory is valid, there are infinitely many possible methods of arriving at those results).

OOP is simply a successful attempt to put those things together, giving those general terms (like modules, encapsulation, substitution) more precise definitions and elaborating conceptualizations on those definitions (patterns) that can fit into programming languages.

Think of OOP first not as a "language feature" but as a "common lexicon" with which software engineers approach software design.

The fact that a given language does or does not have primitives that directly enforce that lexicon (ensuring, for example, that a "capsule" is not opened inadvertently by someone who is not supposed to) is a secondary aspect of OOP design. That's why even large C projects are often "managed as" OOP, even if the language itself offers no direct support for it.

The advantage of all that is not recognizable while a project's size stays within a single developer's capability to understand and track everything he does (in fact, in those situations it may even be seen as "overhead"), or while a small group is developing something in a short period. That's the main reason juniors who studied OOP in terms of "language features" often misinterpret it, producing badly designed code.

How OOP fits into a language depends on how the language designers interpret the OOP principles in their own constructs.

So "encapsulation" in C++ becomes "private members" (and a "capsule" becomes a class), "substitution" becomes virtual function overriding or template parametrization/specialization, etc., while in D a capsule is a "module" (and substitution goes through classes etc.), thus making certain paradigms or patterns directly available in one language and not in another, and so on.

What recruiters seek in asking OOP questions is simply to check your capability to abstract and to conceive a software design for future large projects and developments. OOP, for them, is just a "dictionary" they assume both you and they know, so that you can talk about other, more general things, or concretize into a specific implementation.

Emilio Garavaglia
  • 4,289
  • 1
  • 22
  • 23
1

The answer is yes, as several others have noted.

BUT, if you want to work on piles of non-OO procedural spaghetti code, you can find that out there too. I think you will prefer the OO work.

EDIT: Forgive my case of gunslinger's cynicism and sense of humor. As Raynos said, just because something is OO doesn't mean it's good. Proper application of OO takes real work and thought; just having instances of it does not automatically mean an app is well made. And conversely, I'm sure there's well written procedural code out there. My experience in corporate IT shops through the 90's and 2000's has been that a lot of bad code was written and probably still exists. But closer to the OP's question, I have noticed that the smarter developers are, when given the chance, moving to more OO systems.

Bernard Dy
  • 3,188
  • 26
  • 33
  • 3
    -1 for implying non-OO code is spaghetti. and that OO makes good "not spaghetti" by black magic. – Raynos Jul 04 '11 at 12:48
  • @Raynos That is a fair point. And just because something doesn't use OO doesn't mean it's bad. I will edit. – Bernard Dy Jul 04 '11 at 12:50
  • It's also not just procedural vs procedural/OOP; there are also functional paradigms. – alternative Jul 04 '11 at 22:46
  • I've worked on unmaintainable OO apps where objects were scattered around like confetti. OOP is not a magic bullet; it's just a more defined way of organising your code. – gbjbaanb Jul 05 '11 at 09:15
  • 2
    Yes, yes, a thousand times yes! Please see the edit. My comment was really more a barb at the numerous instances of poor legacy code I've had the pleasure of working on than it was a particular snub of procedural or OO. But if I had the choice, I'd rather work on a well-designed OO system than a well-designed procedural system; and a well-designed system over a poorly-designed one any day. – Bernard Dy Jul 06 '11 at 00:43
1

OO is a fundamental basis on which other techniques are built. A key point is to first fully understand the difference between a type (class) and an instance of that type. Don't try to read on without fully understanding that (thinking it will become clear later), because you'll have to read the rest all over again once you catch the vision.
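
A tiny sketch of that distinction in C terms (the Account name is made up, and the same holds for classes in any OO language): the struct declaration is the type, and each variable of that type is an instance with its own state.

#include <stdio.h>

struct Account { double balance; };    /* the type: a blueprint, no storage */

int main(void) {
    struct Account a = { 100.0 };      /* one instance                      */
    struct Account b = { 250.0 };      /* another, completely independent   */
    a.balance -= 25.0;                 /* mutating a leaves b untouched     */
    printf("a=%.2f b=%.2f\n", a.balance, b.balance);
    return 0;
}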

Once you get the hang of it, you'll never want to do without it. I'm not a purist when it comes to encapsulation, patterns, frameworks or whatever. On the job, you'll have to adapt to various views and concepts. I'll list some previous job experiences of my own:

At one company, my peers wanted as much lazy loading as possible (empty constructors, bulky properties that had to check for null values everywhere). They were building web-based server-side objects that lived a short life.

The next job was the total opposite. Objects lived inside a desktop (Excel-based) application. As much initialization as possible had to be in the constructor (or one of the many constructor overloads). Empty constructors were not allowed, since empty objects had no right to exist (which made persistence quite a challenge). In addition, I had to adapt to their "coding style standards" (where to open a parenthesis, add whitespace after comments, etc...), because my code could not be checked in if it didn't get through StyleCop.

Currently I'm working at a company where none of the developers has ever tried to understand OO. It's hard to express how extremely frustrating that has been. I've had to improve my grep skills; in fact I have a HotScripts macro assigned to my F12 key in order to grep for the selected text. I'll spare you the other frustrations...

Once you obtain OO skills, you'll be almost allergic to spaghetti! In all cases, however, OO or not, be patient and adapt. Be reluctant to "throw it away and start over"; when it comes to throwing things out, your boss would rather choose you. Unfortunately "making money" is more important than elegant code.

Sorry for the long answer but I tried to cover most of the scope of your question :-)

Louis Somers
  • 661
  • 5
  • 9
  • Thank you very much for your stories.. I enjoyed reading them! Currently I'm working on a C++ project and I'm using this opportunity to think of possible ways I could use OO techniques. It's going well at the moment :). +1 for your answer. Thank you. – ale Jul 04 '11 at 20:33
0

It depends. One reason you need to know OO is that it's the lingua franca of the programming world. As another answer points out, nearly every major language is OO in some way, which means that basically any company that might hire you is using an OO language. Ever try hiring OCaml programmers? It's impossible; the talent pool is too small. If you start your company using OCaml and your company becomes successful, you won't be able to hire programmers fast enough and you'll go out of business. Therefore nearly every company with more than 3 programmers uses an OO language, and in order to communicate with your coworkers and use your platform's libraries, you'll need to have an understanding of OO.

Depending on the particular language the company is using, inheritance and polymorphism are either extremely important or just moderately relevant. You can't do anything in Java without busting out 10 GoF design patterns and a dependency injection framework. Java is one of the most widely-used languages, therefore OO principles are really important to the many companies using Java.

If the company is using a modern hybrid OO/functional language that has lambdas and function pointers, like Scala or C#, then inheritance suddenly becomes less important, because you have higher-order functions to handle lots of the really simple things that would otherwise require a lot of ceremony. However, you still need to be able to work with OO code, because most libraries you use will be written in an OO way.
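
For what it's worth, the same effect shows up even in plain C, as a rough sketch: passing an ordinary function to the standard library's qsort() injects behavior the way a one-method interface would in classic OO, with none of the subclassing ceremony.

#include <stdio.h>
#include <stdlib.h>

/* The "strategy" is just a function, not a class implementing an interface. */
static int by_value(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int v[] = { 3, 1, 2 };
    qsort(v, sizeof v / sizeof v[0], sizeof v[0], by_value);  /* behavior injected at the call site */
    printf("%d %d %d\n", v[0], v[1], v[2]);
    return 0;
}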

If inheritance and polymorphism aren't important to the company, you're still likely to get OO questions because:

  • You are interviewed by an HR person who knows nothing about programming and is asking questions off a checklist. Do you have 3 years of X, do you multitask, etc.
  • You are interviewed by an engineer who wants to make sure you haven't been living under a rock. OO is the lingua franca, so you'll be asked some cursory questions about it.
  • You are interviewed by an engineer who isn't very adept at interviewing. In general, engineers love trivia and complexity, so you will get a lot of questions about polymorphism, inheritance, and GoF design patterns only because these things are interesting to the person interviewing you.
dan
  • 349
  • 3
  • 10
0

The short answer is yes.

The longer version of why you feel this dilemma about its importance comes down to the fact that you haven't yet worked on projects or implementations that serve a real purpose. It's perfectly valid in classrooms to have examples of an Automobile extended to Car, Truck... but when you jump into software development, it's a solution targeted at easing some set of tasks. If it weren't for OO, we would have everyone writing similar code across the code base, or reinventing wheels every day. Imagine what a mess it would be to dive into such a codebase to fix something. The classroom examples are only a vague representation of how and why it is done. The real test comes when you start building your app; no doubt, like anything, OO can be terribly misused, but clear and concise usage far outweighs that by preserving your sanity. So you'd better start working on this weak area, lest you churn out nightmare code.

Jim G.
  • 8,006
  • 3
  • 35
  • 66
V4Vendetta
  • 594
  • 3
  • 9
0

An object-oriented language helps you keep an object-oriented design in your code, which is good. But it's also true that such a design can be obtained in a language of any other paradigm: the reason for the popularity (especially among companies) of OO languages probably lies in the commercial success of Java and C#.

Had Mr. Gates started his company the right way, we would probably be studying SCHEME before applying for a job.

Lorenzo Stella
  • 259
  • 1
  • 4
  • Scheme appeared the same year as Microsoft did (although there were certainly other LISPs before that). Also, Java and C# owe a lot of their success to Alan Kay, specifically Smalltalk, and Simula before that, as well as to the success of C++. – JasonTrue Jul 05 '11 at 06:56
-1

In a word "Yes".

You may think you know the theory, but you need to write some code and put it into practice. There are - literally - thousands of examples and exercises available online.

Binary Worrier
  • 3,112
  • 2
  • 25
  • 23