Previously on "Who dual boots Windows * with some sort of Linux"

  • petergriffin
    replied
    Ping Sysman

    Originally posted by Sysman View Post

    Do not try to use GParted on NTFS boot partitions. As of a year or 18 months ago it didn't understand NTFS disk geometry properly, and a subsequent boot would result in some pretty lengthy CHKDSK runs. Shrink it as far as it will go from within Windows. To shrink further you will need to mount it as a non-system disk in another instance of Windows, or buy other software which is designed to deal with NTFS in its latest incarnation.
    I was set to shrink natively in Windows via Administrative Tools -> Disk Management -> Shrink Volume, when I read this:
    https://help.ubuntu.com/community/Ho...ons#Defragging
    If you are planning to use GParted, you can skip defragging and save yourself some time, because GParted can resize an NTFS partition safely regardless of its state of fragmentation.
    and:

    GParted Partition Editor

    If you decide to use GParted, you have to remember to uncheck the 'round to cylinders' checkbox, otherwise GParted will dutifully move the entire partition to align it with cylinder boundaries. Unfortunately this takes a long time, and when it's finished, usually results in booting problems. This is because the Windows boot loader depends on block addressing to find parts of itself, so when the partition is moved a little, it gets all mixed up and disjointed. Sometimes it can fix itself automatically, but other times it requires repairs from the Windows Installation Disc. If you just remove the check mark, you will find that GParted will be able to complete the NTFS resize in a fraction of the time it would have taken otherwise, and afterwards Windows will boot just fine.
    Is it possible that you forgot to uncheck that option? And did you manage to rescue the partition?
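
    For reference, the native Windows shrink route mentioned above can also be driven from the command line with diskpart, which is the same engine behind Disk Management's Shrink Volume; a rough sketch, assuming the NTFS system volume shows up as volume 2 (check with list volume first):

    Code:
    rem Run from an elevated Command Prompt
    diskpart
    DISKPART> list volume
    DISKPART> select volume 2
    DISKPART> shrink querymax
    DISKPART> shrink desired=20480
    rem desired= is in MB, so this asks for roughly 20GB back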

  • leapFrog
    replied
    You don't start your laptop using BIOS or UEFI; you start your operating system.

    If you have installed Windows using UEFI, you can then install Linux with rEFInd, using the Windows EFI partition.
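
    For reference, you can confirm which mode the machine actually booted in before committing to the rEFInd route; a minimal sketch, assuming a Debian/Ubuntu-based distro where rEFInd is packaged (elsewhere you'd run rEFInd's install script from the upstream zip):

    Code:
    # Linux only exposes this directory when booted via UEFI
    [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "BIOS/legacy boot"

    # Install rEFInd into the existing EFI System Partition
    sudo apt-get install refind
    sudo refind-install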

  • bobspud
    replied
    Originally posted by petergriffin View Post
    I've used (not owned) NeXT workstations back in the swinging '90s, so I am familiar with the concept. I don't rule out buying a Mac in the future, but I need access to some tools that don't work on the Mac (GRUB, LILO, SysV init scripts, iptables) and I need to test them on the bare metal.

    The beauty and also the limitation of virtualization is that many of the above-mentioned tools can be implemented on the host machine without configuring the guest. You don't need a kernel to run a guest, you don't need to configure a firewall, you can get away with initialization scripts, etc.

    While this is a great thing if you need a rough-and-ready system to go, it doesn't give you a clue whether the system is broken until you test it on a real machine.

    Up until 3-4 years ago I considered myself an expert because I had installed countless Linux distros on VMs, only to discover later that the configurations didn't work on the bare metal (different drivers, disk geometry and so on). I've learnt my lesson.
    Sounds like you are doing something wrong if you are having driver issues just because you are swapping out an emulator. I have never seen a problem like that. Normally the worst you will get is that you have NO internal disks and need the FC HBA to load first, or a god-awful Dell PERC RAID card that does need drivers added to the distro (or used to). The only way you can find that sort of stuff is buying servers on eBay, then shipping them on when you are happy that you know stuff about them...

    The best way to learn Linux is to start with Gentoo or Debian, install the minimum image, and build up the server from that point. A few years back I got to a level where I could knock up a postgres mail server with no more than about 100 meg of packages.
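
    For reference, the minimal-image starting point on Debian is roughly this; a sketch with debootstrap, where the target mount point and mirror are just examples:

    Code:
    # Pull a minimal Debian base system onto a mounted target partition
    sudo debootstrap stable /mnt/target http://deb.debian.org/debian
    # Then chroot in and add only the packages the server actually needs
    sudo chroot /mnt/target /bin/bash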

    I am really fed up with being an Architect at the moment, so I am re-learning my old skills. Today I am awaiting a book on Chef and a Ruby programming guide.
    Next week I will mostly be building MAS stacks for fun.

  • petergriffin
    replied
    Originally posted by bobspud View Post
    Buy a MacBook Pro (Here is your Unix OS for your bread and butter, although as an ex-Solaris engineer you are scaring me that you didn't know that Macs run Unix...)
    I've used (not owned) NeXT workstations back in the swinging '90s, so I am familiar with the concept. I don't rule out buying a Mac in the future, but I need access to some tools that don't work on the Mac (GRUB, LILO, SysV init scripts, iptables) and I need to test them on the bare metal.

    The beauty and also the limitation of virtualization is that many of the above-mentioned tools can be implemented on the host machine without configuring the guest. You don't need a kernel to run a guest, you don't need to configure a firewall, you can get away with initialization scripts, etc.

    While this is a great thing if you need a rough-and-ready system to go, it doesn't give you a clue whether the system is broken until you test it on a real machine.

    Up until 3-4 years ago I considered myself an expert because I had installed countless Linux distros on VMs, only to discover later that the configurations didn't work on the bare metal (different drivers, disk geometry and so on). I've learnt my lesson.

  • bobspud
    replied
    Originally posted by stek View Post
    MacOSX is BSD-based, and there are those of us who decry Linux as not Unix but Unix-like. And again, there are those who say AIX is not a 'proper' Unix since it breaks the 'everything is a file' paradigm - the AIX ODM is a Berkeley DB, I think....

    I do feel Linux is a simpler concept than the proprietary Unixes, since with IBM/AIX, PA-RISC/HP-UX and Sun (Oracle)/SPARC there's so much you can do with the hardware, LPAR/LDOM/vPAR etc., all done in firmware, not an extra layer as per ESX etc...
    ^ this ^

    Mac OSX was the best bits of NeXTSTEP with an even better UI.

    I have not found anything that I need a Linux box to do for me that can't be handled by OSX. If I want to dick around with a cluster or something like that, I can spawn an AWS instance. (From my Mac CLI, coz that works too.)

  • stek
    replied
    Originally posted by d000hg View Post
    MacOSX isn't Linux though; it's just Linux-like. Many things about it are different.

    I don't know that the fact you use Linux all the time means you need Linux to be the native OS, though.
    MacOSX is BSD-based, and there are those of us who decry Linux as not Unix but Unix-like. And again, there are those who say AIX is not a 'proper' Unix since it breaks the 'everything is a file' paradigm - the AIX ODM is a Berkeley DB, I think....

    I do feel Linux is a simpler concept than the proprietary Unixes, since with IBM/AIX, PA-RISC/HP-UX and Sun (Oracle)/SPARC there's so much you can do with the hardware, LPAR/LDOM/vPAR etc., all done in firmware, not an extra layer as per ESX etc...

  • d000hg
    replied
    MacOSX isn't Linux though; it's just Linux-like. Many things about it are different.

    I don't know that the fact you use Linux all the time means you need Linux to be the native OS, though.

  • bobspud
    replied
    Originally posted by petergriffin View Post
    Linux/Unix is my bread and butter. I need to have it running on the bare metal. I would dual boot or virtualize just to run the occasional Office.
    Now read what I just advised you to do again...

    Buy a MacBook Pro (Here is your Unix OS for your bread and butter, although as an ex-Solaris engineer you are scaring me that you didn't know that Macs run Unix...)
    Now use Boot Camp to set up a dual-boot partition for your Windows OS
    Now, if you want, you can run a VMware or Parallels hypervisor to run the Windows partition on the Mac OS X primary...

    Why is that so hard?

  • petergriffin
    replied
    Originally posted by bobspud View Post
    You are doing it the wrong way round...
    Linux/Unix is my bread and butter. I need to have it running on the bare metal. I would dual boot or virtualize just to run the occasional Office.

  • bobspud
    replied
    You are doing it the wrong way round...

    Buy a MacBook Pro (750GB drive)
    Buy Parallels or VMware Fusion

    Install Windows <whatever> into a Boot Camp partition and then point your VM software at the Boot Camp partition; it gives you the option to either run Windows natively or inside a VM.

    Now you have a Unix OS for doing what you need and Windows in a box to run crap like Visio...

  • css_jay99
    replied
    As spacecadet says.

    Virtualisation is certainly the way to go, and VirtualBox is free.

    I have a 16GB Mac mini server running a couple of Windows 2003 Server VMs...
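
    For anyone trying the same, VirtualBox VMs can be stood up from the command line as well; a rough sketch, where the VM name, OS type and sizes are just illustrative:

    Code:
    # Create and register a Windows Server 2003 guest
    VBoxManage createvm --name w2k3 --ostype Windows2003 --register
    VBoxManage modifyvm w2k3 --memory 2048 --cpus 2
    # 40960 MB = ~40GB virtual disk
    VBoxManage createhd --filename w2k3.vdi --size 40960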


    css_jay99

  • Sysman
    replied
    Originally posted by petergriffin View Post
    Do you mean 50GB free space or 50GB in total? On my laptop Windows 8 takes 30GB, not counting the three (!) recovery partitions.
    I mean 50GB for the Windows installation, including hidden partitions. At the moment something like 16GB of that is free.

    I then have a spare partition of 50GB on the same virtual disk in case I want to expand it in the future.

    If you copy the lot to a different disk (e.g. to expand it when you run out of room), that triggers the Windows 8 activation nonsense, which is a pain.

    Originally posted by petergriffin View Post
    I'll probably do the opposite. I'll boot from Linux and mount the Win partition in QEMU-KVM.
    I have not tried the QEMU-KVM route, so I can't say how well it works.

  • portseven
    replied
    I would run Linux as the base and just use KVM to create a Windows VM (or more).

    I use KVM quite happily on my HP MicroServer, running a couple of Windows VMs in there (domain server, DHCP, etc.).

    Considering getting one of those Dell XPS 13 Developer Edition laptops; not a bad price for a nice Linux lappy.
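
    For reference, creating a Windows guest under KVM via libvirt looks roughly like this; a sketch, with the name, sizes and ISO path as placeholders:

    Code:
    # Define and start a KVM guest from an installer ISO
    sudo virt-install --name winvm --ram 2048 --vcpus 2 \
      --disk size=40 --cdrom /path/to/windows.iso --os-variant win2k3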

  • petergriffin
    replied
    Originally posted by Sysman View Post
    Go for 50GB at least, preferably 100GB.
    Do you mean 50GB free space or 50GB in total? On my laptop Windows 8 takes 30GB, not counting the three (!) recovery partitions.
    Originally posted by Sysman View Post
    As others have said, your best bet is to stuff either Windows or Linux into a virtual machine. No more fights with one OS stomping on the other's boot setup. Virtualisation really is the way to go here.
    I'll probably do the opposite. I'll boot from Linux and mount the Win partition in QEMU-KVM.
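
    For reference, booting an existing physical Windows disk under QEMU-KVM looks roughly like this; a sketch, where the disk device and memory size are just examples:

    Code:
    # Boot the physical Windows disk in a VM (whole disk, so the bootloader is included)
    # Make sure the host does not have it mounted at the same time
    sudo qemu-system-x86_64 -enable-kvm -m 4096 \
        -drive file=/dev/sda,format=raw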

  • Sysman
    replied
    Originally posted by petergriffin View Post
    The method explained would only work if the distribution shipped with a Microsoft-approved Secure Boot signature, which is not the case for mine.

    I would like an opinion on how much free space to leave on the Windows partition, considering I will only use it in an emergency, e.g. some unthinkable application that can only work in Windows (the only one I could think of is MS Office, in which case I have plenty of horsepower at work).

    What about 20GB? Would that be enough?
    Go for 50GB at least, preferably 100GB. My Windows 8 system is sitting inside a VMware Fusion virtual disk of 105GB. I managed to shrink that from within Windows so that the Windows partition is currently 48GB and the second partition (currently unused) is 57GB, so I have plenty of room for expansion. In a typical virtual machine setup, real disk space isn't used until it's written to; my Win8 instance is only taking up 41GB of real disk space at the moment, and a Linux instance with a 100GB disk is only using 20GB of real space.

    With Win8 Pro, the Express editions of SQL Server and Visual Studio, plus the usual tools like OpenOffice, Cygwin, Firefox, VLC etc., my 48GB Windows 8 partition only has 16GB free.

    Originally posted by petergriffin View Post
    Then I have to understand whether it's better to shrink the partition from within Windows or from Linux with GParted. If anybody has done it before, please give me a shout.
    Do not try to use GParted on NTFS boot partitions. As of a year or 18 months ago it didn't understand NTFS disk geometry properly, and a subsequent boot would result in some pretty lengthy CHKDSK runs. Shrink it as far as it will go from within Windows. To shrink further you will need to mount it as a non-system disk in another instance of Windows, or buy other software which is designed to deal with NTFS in its latest incarnation.
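
    For what it's worth, ntfsresize (part of the ntfs-3g tools) has a read-only mode that reports how far the filesystem could shrink without changing anything; a sketch, assuming the Windows partition is /dev/sda2:

    Code:
    # Read-only: prints the smallest size the NTFS filesystem could shrink to
    sudo ntfsresize --info /dev/sda2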

    As others have said, your best bet is to stuff either Windows or Linux into a virtual machine. No more fights with one OS stomping on the other's boot setup. Virtualisation really is the way to go here.
    Last edited by Sysman; 21 August 2013, 11:49.
