
VMware ESX 4.1



    So last night I wasted 7 hours of my life that I will never get back, because some twunt specified an ESX host with 64 GB of memory and 43 GB of local drive space. The clowns that administer the servers have gone nuts with RDMs all over the place, so moving the server changed from a little click on the vStorage migrate button into a LUN-maps-are-us catastrophe. I'm fuming that I had to waste a 60 GB SAN LUN just to mount a datastore large enough to open the VMkernel swap files across my SAN fabric.

    What's others' perceived wisdom for placement of swap files? If I was building a Unix host I would always include cheap local disks big enough to cater for all the swap in the world, then boot from SAN. The last place I would want swap files is clogging up my fabric while sitting on a RAIDed, mirrored LUN.
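
    To put rough numbers on why the local disk was too small: ESX sizes each powered-on VM's swap file at configured memory minus its memory reservation, so the local datastore needs to cover more or less the host's RAM unless you reserve aggressively. A back-of-envelope sketch (plain Python; the VM names and figures are made up for illustration):

    Code:
    # Rough sizing sketch: VMkernel swap needed if the .vswp files stay local.
    # ESX creates one swap file per powered-on VM, sized at configured memory
    # minus the memory reservation. VM names and figures below are made up.

    vms = [
        # (name, configured_memory_gb, memory_reservation_gb)
        ("sql01", 16, 4),
        ("sql02", 16, 0),
        ("app01", 8, 0),
        ("web01", 4, 0),
    ]

    swap_needed_gb = sum(mem - res for _, mem, res in vms)

    host_ram_gb = 64
    headroom_gb = 10      # some for luck
    local_disk_gb = 43    # what the host was actually specified with

    print(f"VMkernel swap required for these VMs: {swap_needed_gb} GB")
    print(f"Suggested local datastore size: {host_ram_gb + headroom_gb} GB")
    print(f"Shortfall on a {local_disk_gb} GB local disk: "
          f"{max(0, host_ram_gb + headroom_gb - local_disk_gb)} GB")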

    #2
    Swap files should always be local. On top of the volume of fabric traffic, one badly swapping system has the potential to keep the SAN busy when it should be performing other tasks.

    Dump files should be local too. A sudden dump of many gigabytes of RAM ain't gonna help your SAN performance, and in the event of the dump occurring due to a lack of resources, doing it locally increases your chances of getting a valid dump.

    As to algorithms which create swapfiles as some multiple of RAM, that always seemed a daft notion to me. Surely if you have gobs of RAM you are less likely to need swap space. Conversely if you are short of RAM then large swap files are in order. Of course it depends how RAM maps to swap file here.

    I forget the exact algorithm used by OS X, but it only allocates what is needed for swapfile space and will reclaim that space when the memory intensive processes which caused it to expand are terminated.
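
    To put numbers on that, a toy sketch of the two approaches (plain Python; the policy names and figures are mine, not taken from any particular OS):

    Code:
    # Toy comparison of swap sizing policies. Names and figures are made up.

    def swap_as_multiple_of_ram(ram_gb, factor=2):
        """Old rule of thumb: swap = factor x RAM, regardless of actual need."""
        return factor * ram_gb

    def swap_from_expected_demand(expected_commit_gb, ram_gb, slack_gb=2):
        """Size swap by how far the workload may exceed RAM, plus a little slack."""
        return max(0, expected_commit_gb - ram_gb) + slack_gb

    ram_gb = 64
    expected_commit_gb = 72   # workload expected to peak a little over physical RAM

    print("Multiple-of-RAM policy:", swap_as_multiple_of_ram(ram_gb), "GB")
    print("Demand-based policy:", swap_from_expected_demand(expected_commit_gb, ram_gb), "GB")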
    Behold the warranty -- the bold print giveth and the fine print taketh away.



      #3
      Originally posted by Sysman View Post
      Swap files should always be local. On top of the volume of fabric traffic, one badly swapping system has the potential to keep the SAN busy when it should be performing other tasks.

      Dump files should be local too. A sudden dump of many gigabytes of RAM ain't gonna help your SAN performance, and in the event of the dump occurring due to a lack of resources, doing it locally increases your chances of getting a valid dump.

      As to algorithms which create swapfiles as some multiple of RAM, that always seemed a daft notion to me. Surely if you have gobs of RAM you are less likely to need swap space. Conversely if you are short of RAM then large swap files are in order. Of course it depends how RAM maps to swap file here.

      I forget the exact algorithm used by OS X, but it only allocates what is needed for swapfile space and will reclaim that space when the memory intensive processes which caused it to expand are terminated.
      Glad I am not going nuts... Local use of disks for this stuff seems basic to me. It's obviously an old-skool/Unix thing, because every storage or Unix architect I have spoken to has agreed, while the Wintel lot just stare into space... I had a rant at the guys that built the systems, and they don't seem to think that letting SQL and Windows swap all over their fabric is a bad thing.

      But then their VMware platform is worse than something a horse drops out of its BUM!

      Thanks



        #4
        Good in theory, but remember that ESX allows you to do things like switch host hardware on the fly, and you can only do that if ALL your files are on the SAN.
        World's Best Martini



          #5
          Originally posted by bobspud View Post
          Glad I am not going nuts... Local use of disks for this stuff seems basic to me. It's obviously an old-skool/Unix thing, because every storage or Unix architect I have spoken to has agreed, while the Wintel lot just stare into space... I had a rant at the guys that built the systems, and they don't seem to think that letting SQL and Windows swap all over their fabric is a bad thing.

          But then their VMware platform is worse than something a horse drops out of its BUM!

          Thanks
          No, you are not going nuts. While my observations go back to when we were running clients booted from a server over 10 Mb/s Ethernet, it still makes fundamental sense today.

          v8gaz's comment might put a bit of a spanner in the works, if that is a requirement.
          Behold the warranty -- the bold print giveth and the fine print taketh away.



            #6
            I'm pretty sure you need shared storage for vMotion. You could always run two guest OSes in an old-fashioned cluster.
            While you're waiting, read the free novel we sent you. It's a Spanish story about a guy named 'Manual.'



              #7
              Originally posted by v8gaz View Post
              Good in theory, but remember that ESX allows you to do things like switch host hardware on the fly, and you can only do that if ALL your files are on the SAN.
              That's a good point that I hadn't thought of for a well-built system; however, in my case we had to do the move the hard way anyhow (shut down the machine, move the LUNs on the SAN, then create a new empty VM and mount the LUNs again)...

              Originally posted by doodab View Post
              I'm pretty sure you need shared storage for vMotion. You could always run two guest OSes in an old-fashioned cluster.
              I think I would prefer a cluster with anti-affinity rules for a service that just has to always be there. At the end of the day, vMotion can move your guest if the host dies, but if the guest corrupts or BSODs on you, the only thing you will get back on the other side is another perfectly motioned BSOD.

              I found this link that seems to identify the issue we had with the restart. If the servers had been configured with enough local disk space relative to memory, the move would have been a lot easier, because the swap files would have had space to be recreated:

              Impact of host local VM swap on HA and DRS | frankdenneman.nl
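
              For what it's worth, if you want a guest's swap pushed onto the host's swap datastore rather than sitting with the VM's files on the SAN, something along these lines should do it via pyVmomi. This is an untested sketch: the vCenter address, credentials and VM name are placeholders, and it uses the per-VM swapPlacement setting ("inherit", "vmDirectory" or "hostLocal").

              Code:
              # Hedged sketch: point a VM's swap file at the host's designated swap
              # datastore instead of the VM's directory on the SAN.
              # Hostname, credentials and the VM name below are placeholders.
              import ssl
              from pyVim.connect import SmartConnect, Disconnect
              from pyVmomi import vim

              ctx = ssl._create_unverified_context()   # lab only; verify certificates properly in production
              si = SmartConnect(host="vcenter.example.local", user="administrator",
                                pwd="password", sslContext=ctx)
              content = si.RetrieveContent()

              # Walk the inventory for the VM we care about (name is made up).
              view = content.viewManager.CreateContainerView(content.rootFolder,
                                                             [vim.VirtualMachine], True)
              vm = next(v for v in view.view if v.name == "sql01")

              # "hostLocal" asks ESX to keep the .vswp on the host's swap datastore.
              spec = vim.vm.ConfigSpec(swapPlacement="hostLocal")
              vm.ReconfigVM_Task(spec)

              Disconnect(si)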


              So, reading around the subject, my recommendations when building or designing VM hosts would be:

              local disk space = memory + some for luck
              swap stays on the machine (it doesn't matter, because you lose transactions when using DRS or HA anyway, and FT will use an identical machine somewhere else to keep your app in step)
              OS and DATA partitions on the SAN
              HA and DRS will result in lost transactions, so double up and old-skool cluster your important apps, with rules to keep them off the same hosts (a rough sketch of such a rule follows below)
              and if you are doing something really big, include some SSD storage in the budget for tier 1 apps.
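
              On the "rules to keep them off the same hosts" point, this is roughly what a DRS anti-affinity rule looks like through pyVmomi. Again an untested sketch: the vCenter address, credentials, cluster name and VM names are all placeholders.

              Code:
              # Hedged sketch: add a DRS anti-affinity rule so two clustered guests
              # never land on the same host. All names and credentials are placeholders.
              import ssl
              from pyVim.connect import SmartConnect, Disconnect
              from pyVmomi import vim

              ctx = ssl._create_unverified_context()
              si = SmartConnect(host="vcenter.example.local", user="administrator",
                                pwd="password", sslContext=ctx)
              content = si.RetrieveContent()

              # Find the cluster and the two guests that must be kept apart.
              cl_view = content.viewManager.CreateContainerView(content.rootFolder,
                                                                [vim.ClusterComputeResource], True)
              cluster = next(c for c in cl_view.view if c.name == "Prod-Cluster-01")
              vm_view = content.viewManager.CreateContainerView(cluster,
                                                                [vim.VirtualMachine], True)
              vms = [v for v in vm_view.view if v.name in ("sql01", "sql02")]

              rule = vim.cluster.AntiAffinityRuleSpec(name="sql-keep-apart",
                                                      enabled=True, mandatory=True, vm=vms)
              spec = vim.cluster.ConfigSpecEx(
                  rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
              cluster.ReconfigureComputeResource_Task(spec, modify=True)

              Disconnect(si)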



                #8
                There is more to think about here because there are multiple layers. The OS and hence the OS swap file will be on the SAN, wherever you put the VM swap file.
                While you're waiting, read the free novel we sent you. It's a Spanish story about a guy named 'Manual.'





                    #10
                    Enterprise VMware Design

                    As I have designed many VMware installations, here are my top tips...

                    1/ First, before even thinking about the design, assess which candidates are best for VMware. The ideal candidates underutilise their physical hosts. If any candidate requires more than 16 GB RAM or 4 vCPUs, think seriously about not putting it on VMware; would a physical box be better?
                    2/ If you are designing to make best use of HA and DRS, all files must be on a SAN. You really don't want anything local on VMware cluster members apart from ESX itself.
                    3/ VMware systems can't be designed in isolation. You are right about swap files: anything larger than 4 GB is a cause for worry and a waste of disk space. Also disable disk dumps. If an application (or its SA) needs to be able to dump to disk, it's not a good candidate for VMware; explain that to them.
                    4/ Make sure you have a good storage design for the back end, using 'proper' storage arrays such as EMC DMX4 or higher for enterprise installations (I realise most places can't afford this sort of expense, but then be aware of the limitations). Mid-range arrays are OK, but they struggle with large amounts of sustained data writes. You may think this is not the case, but imagine (as happened to me) that someone decides to host all their DEV Oracle databases on VMware. Sounds good, low utilisation and so on. Then imagine that, without telling you, they want to refresh those databases nightly, each one approximately 400 GB. The streaming traffic will completely overwhelm the front-end processors of a mid-range array (I won't go into the technicalities, but trust me that it's a pain to find out).
                    5/ Go for cores rather than clock speed. In this instance, AMD is probably the better product.
                    6/ Finally, and importantly, go back to point 1 and think carefully about candidate admission. VMware licenses are very expensive, SAN storage is very expensive, and SAN fabric ports and switches are very expensive. Set your admission criteria properly and everything will be great! (A trivial screening sketch follows below.)
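
                    To make points 1/ and 6/ concrete, here is a trivial admission screen (plain Python; the thresholds come straight from tip 1, and the candidate names and figures are made up):

                    Code:
                    # Trivial candidate-admission screen using the thresholds above:
                    # anything needing more than 16 GB RAM or 4 vCPUs gets flagged for
                    # a physical box instead. Candidate names and figures are made up.

                    MAX_RAM_GB = 16
                    MAX_VCPU = 4

                    candidates = [
                        # (name, ram_gb, vcpus)
                        ("intranet-web", 4, 2),
                        ("dev-oracle-01", 32, 8),
                        ("file-print", 8, 2),
                        ("dw-sql-01", 64, 8),
                    ]

                    for name, ram_gb, vcpus in candidates:
                        if ram_gb > MAX_RAM_GB or vcpus > MAX_VCPU:
                            print(f"{name}: consider a physical box ({ram_gb} GB RAM, {vcpus} vCPU)")
                        else:
                            print(f"{name}: fine for the VMware estate")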
