Well, let's get this clear: there are five types of notation - Big O (upper bound), little o (strict upper bound), Theta (tight bound), Big Omega (lower bound), and little omega (strict lower bound). It's been about a year since I read the formal definitions, so let's get to the useful part of them.
Now a bit of basics. O(n) means your algorithm is of linear order, i.e. if you increase your input size linearly, the time taken by your algorithm increases linearly too. E.g. array length 100 - time 10 sec, array length 200 - time 20 sec, and so on. Had it been O(n^2), doubling the input size would roughly quadruple the running time. I hope you can see how things work.
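To see this growth concretely, here is a rough sketch (the function names are my own, just for illustration) that counts basic operations instead of wall-clock time, since timings are noisy. Doubling n doubles the O(n) count and quadruples the O(n^2) count:

```python
def linear_ops(n):
    """Simulate an O(n) algorithm: a single pass over the input."""
    count = 0
    for _ in range(n):
        count += 1
    return count

def quadratic_ops(n):
    """Simulate an O(n^2) algorithm: a nested pass, e.g. comparing all pairs."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

for n in (100, 200, 400):
    # n doubles each time: linear count doubles, quadratic count quadruples.
    print(n, linear_ops(n), quadratic_ops(n))
# 100 -> 100 and 10000; 200 -> 200 and 40000; 400 -> 400 and 160000
```

Counting operations like this is exactly what the notation abstracts: it ignores the constant cost of each step and keeps only how the total scales with n.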
Big O notation gives an upper bound on an algorithm's growth rate: if your algorithm is O(n^2), then for every possible input it can do no worse than that (up to a constant factor, for large enough inputs).
Big Omega is just the opposite, a lower bound: the algorithm cannot do better than the time mentioned under the Omega notation.
Theta is a tight bound: the running time is sandwiched both above and below by that rate, so the algorithm grows at exactly that order. People often use it informally for the average case. A good example is quicksort: on average it runs in Theta(n log n), but pick the pivots in the wrong order and you will end up with quadratic time complexity for an algorithm that is supposed to run in O(n log n) under ideal conditions.
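Here is a small sketch of that quicksort behaviour (the helper name and the comparison-counting setup are my own, for illustration). Always picking the first element as the pivot on already-sorted input degrades to roughly n^2/2 comparisons, while a random pivot stays near n log n:

```python
import random

def quicksort_comparisons(items, pick_pivot):
    """Quicksort a list, returning the number of comparisons made.
    pick_pivot(items) returns the index of the chosen pivot."""
    comparisons = 0

    def sort(sub):
        nonlocal comparisons
        if len(sub) <= 1:
            return sub
        i = pick_pivot(sub)
        pivot = sub[i]
        rest = sub[:i] + sub[i + 1:]
        comparisons += len(rest)  # every element is compared against the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    sort(items)
    return comparisons

data = list(range(200))  # already sorted: the worst case for a first-element pivot

worst = quicksort_comparisons(data, lambda sub: 0)
# 199 + 198 + ... + 1 = 19900 comparisons, i.e. quadratic behaviour
print("first-element pivot:", worst)

random.seed(0)
good = quicksort_comparisons(data, lambda sub: random.randrange(len(sub)))
# around n log n, roughly 2000 comparisons for n = 200
print("random pivot:", good)
```

The input data is identical in both runs; only the pivot rule changes, which is why quicksort's worst case is O(n^2) even though its average case is Theta(n log n).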
You can find the formal details in the CLRS book (Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein) if you are interested. If you need the book, comment your email ID.
Best of luck!!