Can I ask then....if you're multiplying 2 numbers...
why is it O(n^2) and not O(n*m)?
Does it not matter that the 2 numbers which we're multiplying are not the same? Does n simply mean "input"?
because n*m is at most the square of the larger of the two sizes. So as the O notation represents an upper bound, you may as well use the square (that, and the fact that you then get to express it in terms of one "variable").
BTW, you can multiply two numbers of size n (an upper bound on both) in close to O(n.log(n)) time by using a fast Fourier transform
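To see where the n*m (and hence the n^2) comes from, here's a toy Python sketch that just counts the single-digit multiplications the schoolbook method would do - purely illustrative, the function name is made up and it doesn't actually multiply anything:

```python
def schoolbook_mult_ops(n_digits, m_digits):
    # Schoolbook long multiplication pairs each digit of one number
    # with each digit of the other, so the work is proportional to n * m.
    ops = 0
    for _ in range(n_digits):
        for _ in range(m_digits):
            ops += 1
    return ops

# A 3-digit number times a 5-digit number: 15 single-digit multiplications.
print(schoolbook_mult_ops(3, 5))   # 15
# If both numbers have n digits, that's n * n = n^2 of them.
print(schoolbook_mult_ops(4, 4))   # 16
```

When both inputs are bounded by n digits, n*m <= n^2, which is why the bound is usually quoted as O(n^2).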
It makes it easy to spot in an instant how long a function or algorithm would take to run for a given size of input.
Like, what is ^?? I've never seen that symbol used before, so I don't get how it relates to the multiplication answer. I really don't understand O(n^2) at all
As others have said, ^ means raised to a power (exponentiation) - it'll be a useful little button to find on your calculator. Linear is O(n^1) [the '1' being unnecessary, so just O(n)], quadratic is O(n^2), etc. Quadratic functions quickly become slower to run than linear ones as the size of the input grows, because the number of operations grows as the square of n. So if n were 10, say, an O(n) function would take 10 steps to complete, and a quadratic one 100.
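Here's a quick Python sketch of that 10-steps-vs-100-steps point, counting comparisons in a linear algorithm (find the max) versus a quadratic one (compare every element with every other) - both functions are just made-up illustrations:

```python
def max_comparisons(values):
    # Linear scan for the maximum: one comparison per remaining element.
    best, comps = values[0], 0
    for v in values[1:]:
        comps += 1
        if v > best:
            best = v
    return comps            # n - 1 comparisons for n elements: O(n)

def pair_comparisons(values):
    # Compare every element against every other element: O(n^2).
    comps = 0
    for i in range(len(values)):
        for j in range(len(values)):
            if i != j:
                comps += 1
    return comps            # n * (n - 1) comparisons

vals = list(range(10))
print(max_comparisons(vals))   # 9
print(pair_comparisons(vals))  # 90
```

For n = 10 the linear scan does 9 comparisons and the pairwise one does 90; double n and the first count roughly doubles while the second roughly quadruples.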
So imagine you are at this do. It takes ten minutes to get to the bar and back forra round. So yous have to calcumerlate the optimum time taken for the dregs because yous dont want to be sitting there dry, do you?
If yous get to the bottom of the glass, it's 'oh ffs'. If everyone gets there, it's 'OH ffs' - that's the big O.
Next week - Algo-Rythm and Computational-Complexity(checking yer change)
An easy way to understand what O(n), O(n^2) mean is to imagine you are doing something with an array of n elements.
If you simply want to add up all the values in an array, you need to look at each one once. If you double the number of elements, you double the number of items to look at.
If you want to sum the elements of an NxN 2D array, you have to look at n^2 elements. If you double n, the number of elements goes up by a factor of 4.
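In Python that looks something like the following (the touch counters are just there to make the operation counts visible):

```python
def sum_1d(values):
    # Touches each of the n elements exactly once: O(n).
    total = touches = 0
    for v in values:
        total += v
        touches += 1
    return total, touches

def sum_2d(grid):
    # Touches all n * n cells of an n x n grid: O(n^2).
    total = touches = 0
    for row in grid:
        for v in row:
            total += v
            touches += 1
    return total, touches

print(sum_1d([1, 2, 3, 4]))                 # (10, 4)
print(sum_2d([[1] * 4 for _ in range(4)]))  # (16, 16)
# Doubling n: an 8x8 grid has 64 cells, four times the work.
print(sum_2d([[1] * 8 for _ in range(8)]))  # (64, 64)
```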
It gets more complicated when we get into O(log(n)) but simple examples explain the point (I hope).
In designing an algorithm, knowing what O it has is important... if you test on a small dataset of 100 elements and then in real life run it on 10,000 elements, an O(n) algorithm will be 100 times slower... but an O(n^2) one will be 10,000 times slower.
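A tiny sketch of that scaling in Python - the cost functions are idealised step counts, not real timings:

```python
def linear_cost(n):
    return n        # O(n): steps grow in direct proportion to n

def quadratic_cost(n):
    return n * n    # O(n^2): steps grow as the square of n

small, big = 100, 10_000
# Going from 100 elements to 10,000 (a factor of 100)...
print(linear_cost(big) // linear_cost(small))        # 100
print(quadratic_cost(big) // quadratic_cost(small))  # 10000
```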
Writing f(x) = O(log(x)) for example just means that for sufficiently large x, |f(x)| is less than C.log(x) for some constant C.
Likewise, if f(x) is a polynomial, say x^n + ... + x + 5, then f(x) = O(x^n), because for sufficiently large x the leading term (the one with the highest power) dominates all the others, however large their coefficients, and this argument can easily be formalized.
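You can check that "sufficiently large" point numerically - the polynomial and the constant C = 2 below are just made-up examples:

```python
def f(x):
    # Example polynomial: x^3 + 7x^2 + 100x + 5, so f(x) = O(x^3).
    return x**3 + 7 * x**2 + 100 * x + 5

C = 2
# The bound f(x) <= C * x^3 fails for small x but holds once x is large
# enough, because the x^3 term eventually dominates the rest.
for x in (10, 100, 1000):
    print(x, f(x) <= C * x**3)   # False, True, True
```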
I have grasped the theory of it - what it's for, and why we would use it etc.
And I get that for all arithmetic operations, the size of the input will be the number of digits in the input numbers. That makes sense to me.
What I struggle with is the mathsy stuff.
Like, what is ^?? I've never seen that symbol used before, so I don't get how it relates to the multiplication answer.
I really don't understand O(n^2) at all
I will award you with one free "SallyAnne will concede in an argument" token if you can help me any further
IIRC the ^ operator is the text version of "to the power of", so 2^2 = 4, 2^3 = 8 etc, cos you can't do the little number to the top right of the large one in text.
Addition would be O(n), because the problem only gets slower in direct proportion to the number of bits/bytes being added, and multiplication could be as high as O(n^2). The latter (quadratic time) isn't too bad, at least compared to exponential-time algorithms, but it's not as fast as linear- or constant-time ones.
Thanks Timber....