
Why I think OO is flawed


    Why I think OO is flawed

    This is taken from the General forum at the request of oaksoft. I've collated my arguments from all the posts below:

    Quote:
    --------------------------------------------------------------------------------
    Niall, could you explain your reasoning behind your OO flawed comment. Being a relative newbie to the OO world I would find the observations of someone like you interesting.
    --------------------------------------------------------------------------------

    Sure.

    My main problem with OO is how it is presented. It is usually presented to novices as the pinnacle of software design, to be used wherever possible, ignoring the fact that it is often a sub-optimal method of design. Let me give you an example.

    In pure OO design, everything is represented by some kind of object and the program is a set of inter-related objects. Objects are conceptualised, designed and named in human terms eg; a button, a window, a file etc. Mostly, what is convenient to the human is also workable for the computer, because a lot of these objects are human-convenience abstractions.

    However, what is convenient to a human is often not to a computer. Inefficiencies multiply as layer upon layer of abstraction is forced against the natural way of functioning of a computer. Thus, as any CS student or teacher knows, a naive approach to OO design in purely human terms runs terribly inefficiently - often unusably so.

    In fact, one could say that the experienced OO designer has learned where to depart from pure OO in order to write working code. The more experience they have, the more subtle and complex the departures from "correct" OO they make. And best of all, 90% of experienced OO designers don't even realise they do this.

    I prefer to look at OO as a limited tool. In many places it is indeed the optimal solution. In many others, a functional approach is better. In still others, one might find procedural code the best. My point is, OO is not some uber-technology - its optimal usage is much more limited, and the best designs combine OO with many other design techniques at once.

    Indeed, this is why I often say you should view OO as a format for conveniently laying out your code in a maintainable fashion. I write my assembler in an OO format for maintenance and extensibility, and before anyone posts any snide comments on this, I suggest you go and read my assembler first (look at NedHAL). It has constructors, destructors, instance data etc.

    Obviously, not using a uniform OO design methodology comes with a price - mostly that other programmers get confused when working with your code. Depending on your environment, it may, in people-cost terms, be better to use pure OO all the way through. However, I would argue that in many projects, any newbie OO engineer will have the same problems with OO code written by a very experienced engineer - for precisely the reasons listed above.

    Lastly, I'll give an example of some coursework my class had to do during my compsci degree. It was to write a program which read from one large file and spat records into three or four other files based on record content. The typical class effort took between twenty and thirty seconds to complete, whereas mine took less than half a second. The difference? They used a pure, absolutely correct OO design, whereas I tailored mine (ie; broke purity) to how the computer actually works.
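
    To make that concrete, here is a rough Python sketch of the two approaches - the record layout, field names and routing rule are all invented for illustration, this is just the shape of the difference:

    Code:
    --------------------------------------------------------------------------------
    # "Pure OO" version: every line becomes a short-lived Record object,
    # routed through a per-category writer.
    class Record:
        def __init__(self, line):
            self.fields = line.rstrip("\n").split(",")

        def category(self):
            return self.fields[0]          # route on the first field

    def split_pure_oo(src, dests):
        writers = {cat: open(path, "w") for cat, path in dests.items()}
        try:
            with open(src) as f:
                for line in f:
                    rec = Record(line)     # one throwaway object per record
                    writers[rec.category()].write(",".join(rec.fields) + "\n")
        finally:
            for w in writers.values():
                w.close()

    # Tailored version: no per-record objects, just buffered routing of raw
    # lines, with each output file written in one go at the end.
    def split_tailored(src, dests):
        buffers = {cat: [] for cat in dests}
        with open(src) as f:
            for line in f:
                buffers[line.split(",", 1)[0]].append(line)
        for cat, path in dests.items():
            with open(path, "w") as out:
                out.writelines(buffers[cat])
    --------------------------------------------------------------------------------

    The first version spends its time constructing and tearing down a throwaway object per record; the second just routes raw lines into buffers and plays to the machine's strengths - sequential reads and big sequential writes.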

    And that, in a very small nutshell, is why I think OO is flawed. Not in itself inherently, but in how it is presented, used and marketed.

    --- cut ---
    XML is flawed because it's OO, and thus my previous explanation about the flaws in OO apply.

    Furthermore, it is built on top of existing paradigms without renovating them, requires a large and complex decoding engine (already a sign of something wrong with it, as with SGML) and does nothing to prevent the introduction of incompatible tags (eg; like how Netscape kept extending HTML to stop sites being compatible with anything except their browser). While it's a useful technology which certainly makes some things much easier, Tornado is a completely different beast.

    --- cut ---
    I don't know much about web services, but I still hold that XML uses an OO approach. To work with XML, you use a DOM, and it's most certainly OO in every implementation I've seen. It also has all the problems I've mentioned in previous posts, and I don't see it being the technology to link disparate code together. Why? Further problems I see are its text-based format (unsuitable for high-performance areas), lack of flexibility in encoding (much more code needed, inefficient, can have difficulty encoding exotic formats) and lack of power in the model itself (it could do with being programmable ie; scripting).
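
    To illustrate the DOM point, here's a small sketch using nothing but Python's standard library (the XML document itself is made up). The DOM turns the whole document into a tree of node objects before you can touch it; a SAX-style stream parse of the same document never builds a document-sized object graph at all:

    Code:
    --------------------------------------------------------------------------------
    import xml.dom.minidom
    import xml.sax

    doc = b"<orders><order id='1'>12.50</order><order id='2'>3.99</order></orders>"

    # DOM: everything becomes an object up front, then you walk the tree.
    tree = xml.dom.minidom.parseString(doc)
    total_dom = sum(float(o.firstChild.data)
                    for o in tree.getElementsByTagName("order"))

    # SAX: a stream of events; only the running total is ever held.
    class OrderTotal(xml.sax.ContentHandler):
        def __init__(self):
            super().__init__()
            self.total, self.in_order = 0.0, False
        def startElement(self, name, attrs):
            self.in_order = (name == "order")
        def characters(self, content):
            if self.in_order and content.strip():
                self.total += float(content)
        def endElement(self, name):
            self.in_order = False

    handler = OrderTotal()
    xml.sax.parseString(doc, handler)
    assert total_dom == handler.total
    --------------------------------------------------------------------------------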

    --- cut ---
    Perhaps you already view my position as being absolutely correct and thus can't see the point I'm making? My point is that there is a myriad of approaches and instead of us trying to impose one over all others (eg; MS .NET, Corba) we should be aiming for a truly agnostic method of joining bits of code together. Tornado is my version of that agnostic approach, where solutions written with any variety of approaches work equally hand-in-hand with one another.

    I can't find the relevant post on your suggestion for a solution to the problems of OO.
    Basically, I propose making all code modular and reusable to a tiny grain (eg; a module to generate random passwords, or a module generating the current date etc) and linking them together in a far more generic fashion than OO. This is obviously like .NET or Corba, but they're based around objects whereas Tornado is not - it's based around processors of data with their inputs and outputs connected by streams of data. Clearly this has elements of UML, but more in the abstract sense, as all the UML implementations that I know of are OO.

    One clear difference in Tornado is that there is no fixed API at all between the "objects" nor any fixed naming of "objects" (eg; you don't have to specify which data processor, the system allocates you one most of the time - thus permitting complete substitutability of data processors).
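
    As a very rough sketch of what "no fixed naming of data processors" might look like - everything here (the registry, the type strings, the decorator) is invented for illustration, not how Tornado is actually built:

    Code:
    --------------------------------------------------------------------------------
    from typing import Callable, Iterable

    # Processors are registered by the type of data they accept and produce.
    REGISTRY: dict[tuple[str, str], Callable[[Iterable], Iterable]] = {}

    def processor(in_type: str, out_type: str):
        def register(fn):
            REGISTRY[(in_type, out_type)] = fn   # later registrations substitute earlier ones
            return fn
        return register

    def allocate(in_type: str, out_type: str):
        return REGISTRY[(in_type, out_type)]     # callers never name an implementation

    @processor("text/lines", "text/words")
    def split_words(lines):
        for line in lines:
            yield from line.split()

    @processor("text/words", "stats/count")
    def count_words(words):
        yield sum(1 for _ in words)

    # Wiring is by data type only; the stream flows through whatever
    # processors the system allocates for each step.
    lines = ["the quick brown fox", "jumps over the lazy dog"]
    words = allocate("text/lines", "text/words")(lines)
    print(next(allocate("text/words", "stats/count")(words)))   # -> 9
    --------------------------------------------------------------------------------

    The caller only says what kind of data goes in and what kind should come out; which implementation does the work is the system's business, so dropping in a replacement processor touches no other code.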

    Comments?

    Cheers,
    Niall

    #2
    Are you effectively talking about a library of components whose construction can be either OO, procedural etc as best suits the application, or am I misunderstanding entirely?

    Either way, you say there is no fixed API between data processors and you don't need to specify which data processor.
    How do you configure them to work together?
    Would this be done at run time, coding time or design time?

    A more detailed explanation of what you mean by data processor would help.

    Bear in mind that you should describe your idea to me as you would to a 4 year old child. I'd rather have my hand held for a few hours to get a firm understanding than spend ages trying to guess precisely what is going on and fail.



      #3
      The first part of this seems to be a bit of a statement of the bleedin obvious. Code to suit the application.

      Currently I am doing something which is not processor intensive, and maintainability is paramount. Not just OOD but all the other hideously inefficient things like using meaningful long strings rather than integer flags.

      However, in the past, doing real-time process simulation where it has been very difficult to get the model to run in real time, all of this nicety has had to go out of the window, and we often used in-line code. We did not code directly; we used a tool, basically a glorified macro processor, to expand parameter-list descriptions of fairly basic code blocks.

      It is also very easy to bolt a CAD-type graphical front end onto a processor of this type - I still have the VB source code somewhere. Various commercial packages (there used to be one called Xanalogue) do the same thing, although they tend to be a) expensive and b) often unnecessarily specific to a particular application like simulation.



        #4
        Niall take a look at

        TableOrientedProgramming

        and

        oopbad

        TOP

        This guy has been refining his ideas for nearly a decade.

        If your ideas are to catch on you need to publish details and examples for public perusal. No big programming idea ever caught on that was shrouded in secrecy.

        Paul C.



          #5
          Are you effectively talking about a library of components whose construction can be either OO, procedural etc as best suits the application, or am I misunderstanding entirely?
          No, you're quite correct. What I'm doing is to make the relations between these blocks of code freeform.
          Either way, you say there is no fixed API between data processors and you don't need to specify which data processor.
          How do you configure them to work together?
          Would this be done at run time, coding time or design time?
          It works this out for a number of different reasons, the biggest one being available routes of data flow. After that come learned heuristics, user specification and finally yes, you can specify a specific component.

          What you must bear in mind is that the traditional gap between using a computer and programming it is very much blurred. Think back to using a home computer in the 1980s and you'll have a very good idea. A typical user experience would involve lots of dragging and dropping (more than any current system), lots of pressing of keyboard shortcuts which you've assigned macros to, dragging stuff round UML diagrams and writing short sections of Python. That's use, not development. Obviously, it'll be techie-only.
          A more detailed explanation of what you mean by data processor would help.
          Err, literally anything which can process data. There are two things in the entire system in Tornado - data, and a data processor. A disc drive would be data for example, as would files or general information. A processor is everything else - the user, a piece of code, the printer etc. Interestingly, the screen is considered data.
          Bear in mind that you should describe your idea to me as you would to a 4 year old child. I'd rather have my hand held for a few hours to get a firm understanding than spend ages trying to guess precisely what is going on and fail.
          Ok, I'll try an example. Tornado is built around schemas which really are glorified macros (templates would be a better term). You can attach them to keys, events, anything - in fact, the type mapping system is just a set of schemas. Schemas are inferred (all possible type conversions known by the system), user specified (literally written/selected by hand) and prioritised based on past experience (the system adapts to your actions and system characteristics).

          Ok, consider that there are two machines on a network, one fast and one slow. You have an MPEG4 movie on your slow machine and you'd like to watch it, but your machine isn't fast enough. Right now, you'd be stuck - but on Tornado, it would realise that MPEG4s with the given bitrate won't work on the local machine, so via its known schemas it'll outsource the conversion into something which will play on the local machine. For example, it might invoke an MPEG4 decoder remotely on the fast machine, plug that into an MPEG1 encoder on the same machine and then that into an MPEG1 decoder on the local machine. Problem solved, and you only needed to double-click the file. Note that at any stage, you can intervene, alter quality settings, or make it use some other machine etc.

          The thing to remember is that Tornado knows nothing about MPEG encoders and decoders. It does know the local machine isn't fast enough (through its performance monitoring and past experience), it also knows of a path of type conversion to something also of type Data=>Movie which can play on the local machine. If the user has never used this route, it may ask for permission - but if they have, it'll just do it silently. It also knows if the user overrode its default choice (and so will tend to do what the user forced last time).
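
          As a minimal sketch of that path-finding (the type names and conversion table are invented - the real system would learn and prioritise these rather than hard-code them), it is essentially a search over the known conversions for a chain ending in something the local machine can play:

          Code:
          --------------------------------------------------------------------------------
          from collections import deque

          CONVERSIONS = {                          # known schemas: type -> reachable types
              "video/mpeg4": ["video/raw"],        # MPEG4 decoder (runs on the fast box)
              "video/raw":   ["video/mpeg1"],      # MPEG1 encoder (same box)
              "video/mpeg1": ["video/raw-local"],  # MPEG1 decoder on the local machine
          }
          PLAYABLE_LOCALLY = {"video/mpeg1", "video/raw-local"}

          def conversion_path(src, playable):
              """Breadth-first search for the shortest chain of conversions."""
              queue, seen = deque([[src]]), {src}
              while queue:
                  path = queue.popleft()
                  if path[-1] in playable:
                      return path
                  for nxt in CONVERSIONS.get(path[-1], []):
                      if nxt not in seen:
                          seen.add(nxt)
                          queue.append(path + [nxt])
              return None

          print(conversion_path("video/mpeg4", PLAYABLE_LOCALLY))
          # -> ['video/mpeg4', 'video/raw', 'video/mpeg1']
          --------------------------------------------------------------------------------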

          Something else - this entire process is 100% customisable. By default it carries out what's good for most situations, but you can very easily drop in a replacement for that or any particular conversion which might suit you better. It's effectively a completely script-driven system.

          Also, it doesn't have to be MPEGs - it can be ANY type of data eg; you can read a Word file on a Linux box because it'll outsource the decoding to a Windows box. It's also completely compatible with all existing systems - COM, CORBA, XML etc - so whatever any of them have automatically becomes part of the whole and available to the whole. The entire internet can be added to Tornado - so literally every web page is just data, or indeed a processor of data (eg; a form).

          What's paramount in programming for Tornado is interoperability. You rarely need to specify anything more than the type of data on an input or output because the system does that for you - so Tornado is 75% generic programming. Where you do specify a specific component, it need not be on the local machine - it can be pulled from a trusted other machine, or can be invoked on a nontrusted machine. Or indeed, you can write an alias component that simply pretends to be the specific one, invoking an alternative implementation to do the processing.
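
          And a tiny sketch of the alias idea (names invented): code that asks for a specific component by name can be handed an alias that quietly forwards to some alternative implementation instead.

          Code:
          --------------------------------------------------------------------------------
          COMPONENTS = {}

          def alias(target_name, impl):
              """Register impl under target_name, so callers asking for the
              specific component transparently get the alternative."""
              COMPONENTS[target_name] = impl

          def fast_local_thumbnailer(image):
              return f"thumb({image})"

          alias("AcmeThumbnailer/1.0", fast_local_thumbnailer)

          # The caller believes it is invoking the specific named component:
          print(COMPONENTS["AcmeThumbnailer/1.0"]("holiday.png"))   # -> thumb(holiday.png)
          --------------------------------------------------------------------------------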

          You'll probably not get that, but it's really hard for me to explain because I'm so close to it (I'm that sad I dream of it most nights). Please ask questions. If I can get anyone other than me to understand, I'd whoop for joy!

          Cheers,
          Niall



            #6
            Niall take a look at

            TableOrientedProgramming
            Heh, he has the same idea as me! I've used that idiom for ages now.
            oopbad
            Excellent stuff! He goes a bit too far against OO, but I would absolutely agree with his principles.
            TOP
            It's very interesting how in my conversation with Marshall Cline, he advises precisely the same techniques. I bet any serious OO programmer would too.
            If your ideas are to catch on you need to publish details and examples for public perusal. No big programming idea ever caught on that was shrouded in secrecy.
            There are two answers to this:

            First, there are countless examples of the small guy inventing something, selling it, and a big nasty company patenting his idea and suing him for infringement. It doesn't matter if the small guy had prior art; he has to find at least several million dollars for legal fees. Now that software patents are set to become legal all over the world, you must patent every software idea you can before discussing it openly. Otherwise, since no non-disclosure agreement has been signed, others can patent it out from under you and you're sunk - and that's why 40,000 of my 64,000 needed is just for patents. I'd be fine if I wanted to GPL my ideas, but I don't - I believe in the right to make a living from your code, not just from servicing your code.

            Secondly, I have tried many, many times to explain my ideas and failed. I have even put working code in front of people and they still fail to get it. I have privately concluded that the only way I'm going to get funding is to get investors to trust me, because I don't think anyone else is able to understand. This isn't unusual: most investors I've seen tend to work on probable risk vs. return, and therefore rely on trust and expert advice.

            Ultimately, I'll probably just need to write it myself until enough is there for people to learn the paradigm. Then go from there really.

            Cheers,
            Niall



              #7
              OK Let's try another angle.

              I think I am beginning to understand what you are talking about but you are not really analogising enough for those not deeply involved to truly grasp the concept.

              I am currently considering the concept of Intelligent Drivers.
              Basically this grew as a result of my utter frustration that every time I buy new hardware I need a new "driver".
              Why should I need a driver?
              Why can't the PC "talk to the installed card", find out what it can do and "learn" how to drive it?

              The PC could be taught to do this over time from its experiences on other hardware and also pre-programmed hardware details.

              In short, it works out what hardware is on the attached card and figures out what it can and can't do (obviously this is no minor problem) and learns how to "drive it".
              Over further time it would learn how to drive it better and more efficiently.

              So....where does that leave us.

              I suspect this is analogous to what you are talking about.
              Basically there is the initial database of characteristics, software library "components" and basic common known interface protocols like ethernet.
              No code beyond this initial library needs to be written.
              The rest is effectively "learned" by the app and added to using heuristics which are different between apps.
              Therefore one app could use a fast version of the same functionality, whereas another could use a more maintainable version for wide compatibility.

              Am I close?



                #8
                Re: OK Let's try another angle.

                I think I am beginning to understand what you are talking about but you are not really analogising enough for those not deeply involved to truly grasp the concept.
                Yeah, sorry. Please believe that I am trying. I'd even recommend going and reading up on eastern spiritual thought, because there's a lot of that in this.

                I'm not being facetious here - I mean it in the way that Asian thought views the universe as a self-organising web of inter-related things, which become more and more inter-related the closer you examine them (hence towards quantum mechanics, the most inter-related you can get).

                I'm applying the same to software - make it self-organising and inter-related down to a tiny level. This is obviously the first step on the path, because a working quantum computer would most definitely need to be self-organising in order to maintain coherence.
                I am currently considering the concept of Intelligent Drivers.
                Basically this grew as a result of my utter frustration that every time I buy new hardware I need a new "driver".
                Why should I need a driver?
                Why can't the PC "talk to the installed card", find out what it can do and "learn" how to drive it?
                Acorn did this on RISC-OS back in 1989, in the sense that the drivers came built into the card. You literally plugged in the card and it just worked. Intel have been making noises about this for the upcoming replacement for PCI, but we'll see.
                The PC could be taught to do this over time from its experiences on other hardware and also pre-programmed hardware details.

                In short, it works out what hardware is on the attached card and figures out what it can and can't do (obviously this is no minor problem) and learns how to "drive it".
                Over further time it would learn how to drive it better and more efficiently.
                I've had similar thoughts too, but you'd need a hell of a lot more processing power - plus, crucially, a way for the computer to get feedback from its outputs so it can correctly configure the neural net. Currently, no software knows what the user actually sees, nor can it know.
                I suspect this is analogous to what you are talking about.
                Basically there is the initial database of characteristics, software library "components" and basic common known interface protocols like ethernet.
                No code beyond this initial library needs to be written.
                The rest is effectively "learned" by the app and added to using heuristics which are different between apps.
                Therefore one app could use the same code functionally but a fast version whereas another could use a more maintainable version for wide compatability.

                Am I close?
                You're getting there. The key to understanding Tornado is that it isn't a program or code or a way of doing things - it's a philosophy. I can write down its basic precepts (which are on my web site) and everyone says (correctly) it's not a good advert for investment because it's far too fuzzy. This I concluded after an exhausting round of applications - I cannot explain this to anyone who doesn't understand systemic theory because conceptually, they won't be able to get it (Fritjof Capra does plenty of good books summarising this theory and its applications - I'm just applying it to software).

                Perhaps another tack - its effects on code. Code written for Tornado is utterly reusable; in fact, it's very hard to not make it reusable. It's also automatically distributed - across threads, processors and machines - but you need not care about that in your code beyond making it thread-safe. You also need not, nor should not, care about anything outside your component - not even its API. You can use any style of solving the problem you feel like eg; writing it functionally in Haskell (which I always find "fun") and it will have absolutely no impact on any other code, because in fact it cannot.

                There is also no efficiency penalty to this, because you can use freeform data structures - data can be in any format whatsoever. Issues such as endianness have been made obsolete because that's taken care of for you (so Motorola and Intel now work together seamlessly). Transport of data from A to B, whether it be between threads or across the planet, is no longer your concern.
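
                For comparison, this is the kind of byte-order bookkeeping you currently have to do by hand (shown with Python's standard struct module) - exactly the sort of thing that's taken care of for you here:

                Code:
                --------------------------------------------------------------------------------
                import struct

                value = 0x12345678
                little = struct.pack("<I", value)   # Intel-style byte order
                big    = struct.pack(">I", value)   # Motorola/network byte order

                assert little == b"\x78\x56\x34\x12"
                assert big    == b"\x12\x34\x56\x78"
                assert struct.unpack(">I", big)[0] == value
                --------------------------------------------------------------------------------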

                I suppose one could look at Tornado as being the most flexible and efficient framework you can have - so like .NET, just without a lot of the manual work. No more common internet controls for example. No more differentiation between a keyboard and a printer for example.

                I also made RAM obsolete - there is no longer the concept of loading a program or file in, or indeed of having a program running. RAM is now a cache of the hard drive - which is where it was heading anyway with virtual memory - I just take it all the way.

                Basically, it's a massive simplification of using a computer - there are about five core rules of use, and everything follows them logically based on those. So the learning curve is very low - just get the rules, and the complexity and power follows from there based on combination of those rules.

                That any better?

                Cheers,
                Niall

