
oh dear: Panic selling shuts £2bn fund


    #11
    Ho ho ho... property funds...

    Milan, that's my attitude towards my investments and pension. I'd rather lose the money myself than have someone else lose it for me!!

    Comment


      #12
      Yep, wantacontract,

      Funny thing is with the property fund - I guess they marketed it as being just like holding property, and now, for the poor investors in it, it really is just like holding property: they can't get out.

      Milan.

      Comment


        #13
        Originally posted by sasguru View Post
        AtW, just shut it. You're embarrassing yourself again. Isn't there some hashing algorithm you can perfect or something? There's a good boy.
        I've moved away from hashing now. I used it initially and improved it considerably, but the volume of data forced me to think of a much more effective approach, which I successfully implemented about 24 months ago. It is the kind of secret sauce that allows me to handle terabytes of data quickly on small-scale hardware, processing it all in parallel. Here is a new term for you, sas: "multiway in-place merging" - not that I expect you to understand it, but then again you don't really know how hashing works either.
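        (For the curious: "multiway merging" generally means combining several sorted runs in a single streaming pass. A minimal sketch in Python using the standard library's heapq.merge - purely illustrative, and certainly not AtW's actual SKA code:)

        import heapq

        # Merge any number of sorted runs into one sorted stream in a single
        # pass. Memory use stays proportional to the number of runs, not the
        # total data volume - which is the point at terabyte scale.
        runs = [
            ["a.com", "c.com", "x.com"],
            ["b.com", "c.com"],
            ["a.com", "z.com"],
        ]
        for url in heapq.merge(*runs):
            print(url)  # a.com, a.com, b.com, c.com, c.com, x.com, z.com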

        Comment


          #14
          Originally posted by AtW View Post
          I've moved away from hashing now. I used it initially and improved it considerably, but the volume of data forced me to think of a much more effective approach, which I successfully implemented about 24 months ago. It is the kind of secret sauce that allows me to handle terabytes of data quickly on small-scale hardware, processing it all in parallel. Here is a new term for you, sas: "multiway in-place merging" - not that I expect you to understand it, but then again you don't really know how hashing works either.
          You're right. I'm sure it wouldn't take me more than a few minutes rather than the years it took you, though. And why would I be interested anyway - I could hire some Indian programmer to do it for a few pounds.
          Hard Brexit now!
          #prayfornodeal

          Comment


            #15
            Originally posted by sasguru View Post
            You're right. I'm sure it wouldn't take me more than a few minutes rather than the years it took you, though. And why would I be interested anyway - I could hire some Indian programmer to do it for a few pounds.
            Now there's an idea. We could have a whip-round and stick a bid up on rentacoder for a few hundred quid for someone to build something similar to AtW's SKA before he finishes it.
            ǝןqqıʍ

            Comment


              #16
              Originally posted by sasguru View Post
              I could hire some Indian programmer to do it for a few pounds.
              Problem is, chappy, your Indian programmer would not know what to do unless you explained it to him in great detail, and you don't really know what "multiway in-place merging" is, do you? This topic is actually very poorly described on the Internet and in books; it took a moment of brilliance for me to actually get it done.

              Let me just explain to you, chappy - one of my SKA subsystems just finished deduplicating around 300 bln strings (long URLs), with mapping relationships (one pointing to the other) between them at around 930 bln. I'd like to see rent-a-coder do it cheaper and faster than me - if they do (with reasonable hardware usage), I might as well admit you are right.
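              (A sketch of how deduplication at this kind of scale is commonly done - sort chunks that fit in memory, spill them to disk as sorted runs, then stream a multiway merge and emit each key once. Illustrative Python only, not AtW's implementation; the file handling is simplified away:)

              import heapq
              import sys
              from itertools import groupby

              def dedupe_runs(runs, out):
                  # runs: sorted iterables of URLs (e.g. readers over pre-sorted
                  # spill files); out: any writable file-like object.
                  # groupby over the merged, sorted stream collapses duplicates,
                  # so each distinct URL is written exactly once.
                  for url, _ in groupby(heapq.merge(*runs)):
                      out.write(url + "\n")

              dedupe_runs([["a.com", "c.com"], ["a.com", "b.com"]], sys.stdout)
              # prints: a.com, b.com, c.com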

              OR:

              Take your favourite database and create this table:

              create table UrlMap
              (
                  TargetURL varchar(255),
                  SourceURL varchar(255),
                  Flags int
              )

              Then load 1 bln (that's 1,000,000,000) rows into this table, with an average URL length of 60 bytes - take note of how long it takes. Then create a clustered index on TargetURL - you will be querying on that key. Take note of how long the index build takes. Then run a few queries when you are done, just to see how long they take. Then take note of the cost of the database and hardware you will need to handle at least 10 searches per second.

              Then consider that my SKA handled almost 1,000 times more in around 8 days on one server.
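              (Anyone wanting to feel the pain of this challenge at a smaller scale can try something like the following sketch, which uses SQLite as a stand-in database. SQLite has no clustered indexes, so a plain index on TargetURL approximates the access pattern, and N is deliberately tiny next to the 1 bln in the challenge:)

              import random
              import sqlite3
              import string
              import time

              def rand_url(n=60):
                  # ~60-byte URLs, matching the challenge's stated average length
                  return "http://" + "".join(random.choices(string.ascii_lowercase, k=n))

              N = 100_000  # the challenge says 1,000,000,000; scale up at your peril
              con = sqlite3.connect(":memory:")
              con.execute("create table UrlMap (TargetURL text, SourceURL text, Flags int)")

              t0 = time.time()
              con.executemany("insert into UrlMap values (?, ?, 0)",
                              ((rand_url(), rand_url()) for _ in range(N)))
              print(f"load:  {time.time() - t0:.2f}s")

              t0 = time.time()
              con.execute("create index IX_Target on UrlMap (TargetURL)")
              print(f"index: {time.time() - t0:.2f}s")

              t0 = time.time()
              for _ in range(10):
                  con.execute("select * from UrlMap where TargetURL = ?",
                              (rand_url(),)).fetchall()
              print(f"10 point queries: {time.time() - t0:.4f}s")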
              Last edited by AtW; 18 January 2008, 11:54.

              Comment


                #17
                Alexey,

                we all love your SKA and want you to succeed, but isn't it about time you spent some of your hard-earned on the services of a PR company to do some shouting and get your product out there into the world?

                It's amazing what a bit of PR and marketing can achieve.

                Milan.

                Comment


                  #18
                  Almost at that point, old chap, almost.

                  Comment


                    #19
                    very good

                    Milan.

                    Comment


                      #20
                      Originally posted by AtW View Post
                      Problem is, chappy, your Indian programmer would not know what to do unless you explained it to him in great detail [...]
                      Forget SKA - I'm going to market this post as a cure for the insomniacs of the world. You should get the Nobel Prize for services to the medical industry, AtW.
                      Hard Brexit now!
                      #prayfornodeal

                      Comment
