
Linux vs Unix



    Interesting discussion today at $clientco

    A discussion about reliability and scalability, where the argument was that you cannot build a service running on Linux that matches the reliability and scalability of Unix (AIX, Solaris, HP-UX, etc.).

    My argument is that the days of assuming a service is reliable just because its individual components have multiple layers of redundancy are long gone.

    I cited Google and Facebook as examples where reliability is built further up the stack and component failure is expected and thus architected around. Scale out and designing redundancy up the stack are the way forward.
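
    The "design around component failure" idea above can be sketched in a few lines. This is a minimal, illustrative example (the function and replica names are made up, not anything Google or Facebook actually runs): the caller treats any single replica failing as expected and simply moves on to the next one, so reliability lives in the calling layer rather than in the hardware.

```python
# A minimal sketch of "redundancy up the stack": the caller treats component
# failure as expected and fails over to another replica, instead of relying
# on any single box being bulletproof. All names here are illustrative.

def call_with_failover(replicas, request):
    """Try each replica in turn, returning the first successful response."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as exc:
            last_error = exc  # this replica is down; move on to the next
    raise RuntimeError("all replicas failed") from last_error

# Toy replicas: one is permanently down, the other responds normally.
def dead(request):
    raise ConnectionError("replica unreachable")

def healthy(request):
    return f"ok:{request}"

print(call_with_failover([dead, healthy], "lookup"))  # → ok:lookup
```

    Real systems layer retries, timeouts, and health checks on top of this, but the principle is the same: the service survives even though individual components are allowed to fail.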

    Discuss....
    Politicians are wonderful people, as long as they stay away from things they don't understand, like working for a living!

    #2
    I vote for proper Unix. Why? In the enterprise, where (let's face it) a 5k server vs a 50k server makes little odds, my mantra is 'one throat to choke'....

    Example one:

    Current ClientCo has IBM POWER7, AIX 6.1, WAS 7, DB2, IBM MQ, an IBM mainframe and z/OS. If something doesn't work, we try to fix it; if not, we call IBM and it's fixed pretty pronto (usually), cos it's all IBM.

    Example two:

    Previous permie job: an HP DL580 or something, RHEL Linux, Oracle, and the box couldn't see more than 16GB of RAM. Fair enough, the box has 64GB, so let's 'upgrade the kernel' to HUGE_MEM or something. We upgrade the kernel, the kernel can't load the EMC CLARiiON SAN kernel drivers, cue a weeks-long bitchfest of HP blaming Red Hat, Red Hat blaming EMC and HP, and EMC denying all knowledge...

    So, it's cutting edge vs stability for me. Plus Red Hat et al. change basic stuff so often that what worked in one release is gone or somewhere else in the next.

    Finally, Linux/PC will never have the architecture to do proper firmware virtualisation the way POWER and T-series SPARC (and HP, to a lesser extent) can; it just can't.

    Another big telco clientco moved all its LDAP proxies from Solaris/SPARC to Linux and then moved them back; no scalability.

    For me, for Linux to work at IBM/Sun/HP levels it needs its own hardware, not a pimped-up PC.

    Comment


      #3
      Originally posted by portseven View Post
      I cited Google and Facebook as examples where reliability is built further up the stack and component failure is expected and thus architected around. Scale out and designing redundancy up the stack are the way forward.
      That's all very well, but the trouble is most places don't seem to put that much thought into the whole architecture/design, in my experience. They just bang in cheap-as-chips "commodity" kit with a third-party virtualisation layer and wonder why the servers spend more time on the floor than SY01's pants during his latest bout of gastroenteritis/man-flu/etc.

      Linux has its place for sure, but when you're talking about high-end, absolutely mission-critical workloads, I'm not sure it can compete with proper enterprise UNIX.

      Comment


        #4
        Originally posted by Mr.Whippy View Post
        That's all very well, but the trouble is most places don't seem to put that much thought into the whole architecture/design, in my experience. They just bang in cheap-as-chips "commodity" kit with a third-party virtualisation layer and wonder why the servers spend more time on the floor than SY01's pants during his latest bout of gastroenteritis/man-flu/etc.

        Linux has its place for sure, but when you're talking about high-end, absolutely mission-critical workloads, I'm not sure it can compete with proper enterprise UNIX.
        At one clientco we had a SAN go down; it had been left to its own devices, an old Symmetrix. We got it back, then backtracked through what had been affected by the quite large outage. One system was a three-node HACMP (AIX cluster) that had lost said SAN and had a flap; once the SAN came back up it all fired up again, the system stored-and-forwarded to catch up, no real issue.

        Back to the RCA: the SAN was the issue, but we looked at the AIX three-node cluster and all nodes had been up 1,531 days. No one had logged in much after the install; it just ran, fell over through no fault of its own, and recovered anyway. Forgotten boxes, never rebooted.

        Looks like it was installed, left, and that's it. It worked. Linux? No way. Set it and forget it? No way...

        Comment


          #5
          Originally posted by portseven
          I cited Google and Facebook as examples where reliability is built further up the stack and component failure is expected and thus architected around. Scale out and designing redundancy up the stack are the way forward.
          Indeed. ZFS is another good example of moving redundancy up the stack.


          Originally posted by stek View Post
          Previous permie job, HP DL580 or something, RHEL Linux, Oracle, couldn't see more than 16gb RAM. Fair enough, box has 64gb so let's 'upgrade the kernel' or HUGE_MEM or something, we upgrade the kernel, kernel can't load the EMC Clarion SAN kernel drivers, cue bitchfest for weeks of HP blaming Redhat, Redhat blaming EMC and HP, EMC denying all knowledge...
          That's more to do with it being a multi-vendor solution than with it being Linux, though. I've seen the same thing happen with HP, HDS & EMC, and on another occasion with Sun & EMC.

          The relative sophistication of virtualisation systems is just a question of maturity. x86 might lag at the moment but it's catching up fast, and for a lot of people it's already "good enough" which makes the savings compelling.
          While you're waiting, read the free novel we sent you. It's a Spanish story about a guy named 'Manual.'

          Comment


            #6
            Originally posted by doodab View Post
            The relative sophistication of virtualisation systems is just a question of maturity. x86 might lag at the moment but it's catching up fast, and for a lot of people it's already "good enough" which makes the savings compelling.
            The supposed savings these days aren't what they used to be when you look at the pricing of the latest generation of IBM's POWER7 kit. It's far more cost-effective than the old POWER5 & 6 kit. Some of the lower-end kit is almost "commodity" but with a lot of the reliability and scalability of the higher-end models, which x86 kit just can't match and isn't likely to any time soon, imo.

            Linux isn't catching up; it's always lagging a couple of steps behind, imo. When it does "catch up", proper enterprise UNIX & virtualisation has already moved ahead.

            Comment


              #7
              Originally posted by Mr.Whippy View Post
              The supposed savings these days aren't what they used to be when you look at the pricing of the latest generation of IBM's POWER7 kit. It's far more cost-effective than the old POWER5 & 6 kit. Some of the lower-end kit is almost "commodity" but with a lot of the reliability and scalability of the higher-end models, which x86 kit just can't match and isn't likely to any time soon, imo.

              Linux isn't catching up; it's always lagging a couple of steps behind, imo. When it does "catch up", proper enterprise UNIX & virtualisation has already moved ahead.
              I still think it's cheaper and, if done right, more reliable.

              For example, POWER7 is costed at approx £17K per CPU, and a standard 16-core AMD-based rackmount is circa £6K, which, if you compare the spec.org ratings, is equivalent to circa 5.4 POWER7 CPUs (assuming 65% utilisation).

              So for half the cost of a 5-CPU POWER7 LPAR (about £85K) you could build a nice resilient cluster of 4-5 x86 servers that will give you way more CPU grunt than the POWER7 box...
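
              The back-of-envelope sums above can be written out explicitly. All the figures are the post's own assumptions (rough prices and a spec.org-derived throughput ratio), not authoritative numbers:

```python
# Reproducing the post's back-of-envelope comparison. Every figure below is
# the post's own assumption (approximate prices and a spec.org-derived
# ratio), not an authoritative number.

power7_cost_per_cpu = 17_000   # £ per POWER7 CPU
x86_server_cost = 6_000        # £ per 16-core AMD rackmount
x86_equiv_cpus = 5.4           # one x86 box ≈ 5.4 POWER7 CPUs at 65% util

lpar_cpus = 5
lpar_cost = lpar_cpus * power7_cost_per_cpu     # £85,000

cluster_size = 5               # the post's "4-5 x86 servers"
cluster_cost = cluster_size * x86_server_cost   # £30,000, under half the LPAR
cluster_grunt = cluster_size * x86_equiv_cpus   # 27.0 POWER7-CPU equivalents

print(lpar_cost, cluster_cost, cluster_grunt)   # → 85000 30000 27.0
```

              On these assumptions a five-server x86 cluster costs well under half the LPAR and delivers several times its throughput, which is the poster's point; the comparison obviously stands or falls on the accuracy of those input figures.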

              Comment


                #8
                Originally posted by Mr.Whippy View Post
                Linux has it's place for sure, but when you're talking about high end, absolute mission critical workloads I'm not sure it can compete with proper enterprise UNIX.
                I don't think that Oracle would be pushing OEL as much as they do if that were still true.
                Best Forum Advisor 2014

                Comment


                  #9
                  Originally posted by TheFaQQer View Post
                  I don't think that Oracle would be pushing OEL as much as they do if that were still true.
                  Of course they would! Ellison and his crew never have their customers' best interests at heart, only profit.

                  Clearly I've never worked in an environment where Linux has been implemented right, because in my experience Linux solutions have never stood up quite as well as proper UNIX.

                  I once worked somewhere that ditched a large Oracle-on-Linux implementation after it couldn't get any sort of stability: RAC would just bomb out and halt nodes/databases for no apparent reason, and neither Red Hat, Oracle, VMware nor the x86 Intel kit manufacturer could work out why. Uptime never reached more than a week. They eventually dropped it after spending a few million and moved over to an AIX solution, which went in in a quarter of the time and has uptime numbering hundreds of days.

                  I personally think Linux gets badly implemented because it's so easy for any total f'wit to download and install it and then claim to be an "expert" with no real/proven experience in an enterprise environment, and then go out and get a job with this so-called "expertise".

                  Comment


                    #10
                    There was 'Oracle Unbreakable Linux' too; what a joke that was. AKA 'Oracle Totally-breakable Linux'...

                    Comment
