Previously on "Db table with most records you have worked with"

  • PAH
    replied
    Sounds like a lot of people have very inefficient data storage methods.

    There are only so many unique letters and numbers.

  • ThomasSoerensen
    replied
    My plan B DB reached 190GB at the last customer analysis - and that's without the transaction log. I had to delete a lot of other data on the server to make room for it.

  • NotAllThere
    replied
    I once worked with a VB developer, interfacing his application to SAP. He boasted that his database was approaching 1MB in size. At some point I casually pointed out that the GL table alone was 31GB.

  • Lockhouse
    replied
    One of my systems has a database with c. 1.5 billion records, all small records though. Looking at it is on my list, but as it's working well at the moment it's not a priority.

  • doodab
    replied
    Originally posted by sasguru View Post
    Not to mention more profitable for you, even if you worked as the janitor.
    Originally posted by AtW View Post
    A US President visited NASA one day, saw a janitor working feverishly at sweeping the floor, asked him what he was doing and received the reply, "Because I'm working to put a man on the moon"
    So to extend sas' example you would be working to put pointless bulltulip on the internet. Doesn't have quite the same ring to it.

  • AtW
    replied
    Originally posted by sasguru View Post
    Not to mention more profitable for you, even if you worked as the janitor.
    A US President visited NASA one day, saw a janitor working feverishly at sweeping the floor, asked him what he was doing and received the reply, "Because I'm working to put a man on the moon"

  • sasguru
    replied
    Originally posted by AtW View Post
    I wish I worked at Twitter, life would have been so much easier.

    HTH
    Not to mention more profitable for you, even if you worked as the janitor.

  • AtW
    replied
    Updating a database of 66,555,636,804 rows every 8 hours.

    The largest I've worked with is 3,529,437,390,545 rows.

    I wish I worked at Twitter, life would have been so much easier.

    HTH

  • ChrisPackit
    replied
    Originally posted by ThomasSoerensen View Post

    What is your record?
    My record is 42 rows.

    Hope this helps.

  • NotAllThere
    replied
    There's a load of a few hundred million records into the Data Warehouse every year - fortunately only a handful of fields, and it's being summarised. The source, however, never gets trimmed down and must hold several billion records of ~50 fields.

    I've heard the health insurance companies here are already running databases in the petabyte range.

  • minestrone
    replied
    Originally posted by doodab View Post
    Oracle used something similar for early versions of Oracle Workflow. Oh how we laughed when the people replacing our "pilot" system decided to model 4000 attributes directly (including such niceties as site_1_address, site_2_address, site_3_address, site_4_address) rather than using the order number as a route into a proper data model. 100,000 orders, 400 million rows (this was on fairly low-end Sun hardware; our system ran happily on a Sun Ultra something with 8 disks IIRC) and slightly less than stellar interactive performance. What's that Bob, you need to go back to the drawing board? Do you want to borrow my crayons?
    For a few years there was a great drive in the industry against hard coding anything, to the point where objects and tables were completely abstracted away from what they represent. Stick users, accounts, addresses and transactions into one table with type, attribute and data columns.

    Nobody had a clue what the software did on these systems; you couldn't look at the code and work out what was going to happen because it was all decided at run time.

    Sticking a breakpoint on setName was never an option when a name attribute got fecked up; you had to stick a breakpoint on setAttribute and drift through countless cycles of the method.
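
    (Illustrative only, not the actual system described above: a minimal Python sketch of why that style is painful to debug. With a generic attribute bag there is no per-field setter like setName to break on - everything funnels through one setAttribute-style method.)

        class Entity:
            """Generic EAV-style entity: no typed fields, just a bag of attributes."""

            def __init__(self, entity_type):
                self.entity_type = entity_type
                self.attributes = {}

            def set_attribute(self, name, value):
                # Every field of every entity type passes through here, so a
                # breakpoint on this method fires for users, accounts, addresses,
                # transactions... not just the attribute you actually care about.
                self.attributes[name] = value

            def get_attribute(self, name):
                return self.attributes.get(name)


        class Customer:
            """Conventional model: a typed setter you could actually break on."""

            def __init__(self):
                self.name = None

            def set_name(self, name):
                self.name = name  # a breakpoint here fires only when a name changes


        bob = Entity("customer")
        bob.set_attribute("name", "Bob")           # one of countless calls into the same method
        bob.set_attribute("address", "1 High St")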

  • PAH
    replied
    The table holding the complete known list of prime numbers must be getting pretty large.

    At least it only requires one column.

    Largest known prime number - Wikipedia, the free encyclopedia

    The record passed one million digits in 1999, earning a $50,000 prize.[4] In 2008 the record passed ten million digits, earning a $100,000 prize.[5] Additional prizes are being offered for the first prime number found with at least one hundred million digits and the first with at least one billion digits.

    I spot a plan B. Anyone done a p2p prime number generator app yet?
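
    (As an aside: the record primes in that quote are Mersenne primes found by distributed Lucas-Lehmer testing - the GIMPS project - which is roughly what a "p2p prime number generator app" would be doing. A minimal Python sketch of the test, using nothing beyond the standard library:)

        def is_mersenne_prime(p):
            """Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime
            iff s == 0 after p - 2 iterations of s -> s*s - 2 (mod 2**p - 1)."""
            if p == 2:
                return True  # 2**2 - 1 = 3 is prime; the test itself needs odd p
            m = (1 << p) - 1
            s = 4
            for _ in range(p - 2):
                s = (s * s - 2) % m
            return s == 0

        # Small exponents only - the million-digit record holders use the same test,
        # just with enormously larger p and heavily optimised arithmetic.
        print([p for p in (2, 3, 5, 7, 11, 13) if is_mersenne_prime(p)])  # [2, 3, 5, 7, 13]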

  • doodab
    replied
    Originally posted by minestrone View Post
    I worked on a schema for a major retail bank which consisted of a 3-table model of type, attribute & data. Oracle had to come in when performance somewhat unsurprisingly struggled, and claimed it was the biggest table size they had ever seen.
    Oracle used something similar for early versions of Oracle Workflow. Oh how we laughed when the people replacing our "pilot" system decided to model 4000 attributes directly (including such niceties as site_1_address, site_2_address, site_3_address, site_4_address) rather than using the order number as a route into a proper data model. 100,000 orders, 400 million rows (this was on fairly low-end Sun hardware; our system ran happily on a Sun Ultra something with 8 disks IIRC) and slightly less than stellar interactive performance. What's that Bob, you need to go back to the drawing board? Do you want to borrow my crayons?
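
    (The 400 million figure drops straight out of the one-row-per-order-per-attribute shape of that model - a back-of-the-envelope check in Python:)

        orders = 100_000
        attributes_per_order = 4_000   # modelled "directly" as generic attributes

        attribute_rows = orders * attributes_per_order
        print(f"{attribute_rows:,} attribute rows")  # 400,000,000 - the figure quoted above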

  • minestrone
    replied
    I worked on a schema for a major retail bank which consisted of a 3-table model of type, attribute & data. Oracle had to come in when performance somewhat unsurprisingly struggled, and claimed it was the biggest table size they had ever seen.
    Last edited by minestrone; 9 September 2011, 11:18.

  • doodab
    replied
    About 4 billion, although it was a materialised view consisting of a 4-way Cartesian self-join and a couple of other little tables. I did try to warn them it wasn't a good idea.
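
    (An unconstrained self-join multiplies row counts, which is how a view gets that big that quickly - a hypothetical illustration only, since the post doesn't give the underlying table sizes:)

        import itertools

        # n rows joined to themselves four ways with no join condition -> n**4 rows.
        base_rows = 250
        print(f"{base_rows ** 4:,}")  # 3,906,250,000 - roughly 4 billion from ~250 rows

        # The same blow-up demonstrated in memory with a 10-row "table":
        rows = range(10)
        print(sum(1 for _ in itertools.product(rows, rows, rows, rows)))  # 10,000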
