Well, I would like to explain the concepts.
Firstly, the big-O of a function or algorithm describes how its running time scales as the input grows. Imagine an algorithm that operates on a list of integers. If you change the size of the list, i.e. change the number of items, the time taken by the algorithm changes. But the question is: by how much? Big-O answers this question.
Simply put, when we say the time complexity of an algorithm is O(n), it means the performance of the algorithm varies linearly with the input size. So, let's take a look at various complexities.
O(n) :
I touched on this briefly above. In other words, if you were to plot completion time against input size, the graph would be a straight line.
e.g., consider the following function:
function iterate(list, itemToSearch) {
  // Check every element until a match is found or the list ends.
  for (const item of list) {
    if (item === itemToSearch) {
      return true;
    }
  }
  return false;
}
The above function iterates through the array and returns true if the item is found. In other words, for each element in the array the algorithm performs one comparison. If the input size is n, the number of operations will be n. So, this is an example of O(n) time complexity: the number of operations, and hence the running time, varies linearly with the input size.
You could argue that the number of operations will be just one if the item is the first element in the array. But here is the catch: big-O deals with the worst-case scenario, and in the worst case the item isn't present in the array, so the algorithm performs all n comparisons.
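For example, calling the function above on a small array (the values here are just for illustration):

iterate([5, 3, 9], 5);   // best case: true after a single comparison
iterate([5, 3, 9], 7);   // worst case: false after checking all 3 elements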
O(n^2) :
In this case the performance of the algorithm varies quadratically with the input size. Let's consider the following function:
function combinatorics(list) {
  // Pair every element with every element, including itself.
  for (const num1 of list) {
    for (const num2 of list) {
      console.log(`(${num1},${num2}),`);
    }
  }
}
For the array [1,2,3], the above function will print the following:
(1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3)
Basically our algorithm runs the outer and inner loops to pair each number with itself and every other number. This means that for each element in the array, the inner loop performs n more operations. So, if there are n elements in total, the number of operations is n × n = n², giving a time complexity of O(n^2).
Likewise, the time complexity can be O(n^3), O(n^4), etc., depending on how many loops are nested, as the sketch below shows.
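For instance, here is a sketch of how a third nested loop pushes the count to n^3 (the function name is my own, just for illustration):

function triples(list) {
  // Three nested loops over the same list: n * n * n = n^3 operations.
  for (const a of list) {
    for (const b of list) {
      for (const c of list) {
        console.log(`(${a},${b},${c}),`);
      }
    }
  }
}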
O(1) :
This is called constant time complexity. No matter how many elements you feed to the algorithm, the performance remains the same. A simple example would be something like this:
function constantTime(list) {
  // One operation, no matter how long the list is.
  return true;
}
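A slightly more practical constant-time example (a sketch of my own, assuming the list is non-empty) is indexing into an array:

function firstElement(list) {
  // Array indexing takes the same time regardless of the list's length.
  return list[0];
}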
O(log n) :
In this case the performance of the algorithm varies on a logarithmic scale. Take the example of binary search on a sorted array. First, you pick the middle element as a pivot and compare the item with it. If the item equals the pivot, you stop. If the item is smaller than the pivot, you search the lower half of the array; otherwise you search the upper half. You repeat this until the item is found or the remaining range is empty. Since each comparison discards half of the remaining elements, the worst case takes about log2(n) steps, a classic case of O(log n) complexity.
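Here is a minimal sketch of the binary search just described; it assumes the list is sorted in ascending order, and the names are my own:

function binarySearch(list, itemToSearch) {
  let low = 0;
  let high = list.length - 1;
  while (low <= high) {
    // The middle element plays the role of the pivot described above.
    const mid = Math.floor((low + high) / 2);
    if (list[mid] === itemToSearch) {
      return true;            // found it
    } else if (list[mid] > itemToSearch) {
      high = mid - 1;         // keep searching the lower half
    } else {
      low = mid + 1;          // keep searching the upper half
    }
  }
  return false;               // range is empty: item isn't present
}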
If you double the input size, it adds only about one extra step. Even increasing the size by a factor of 10 or 100 barely changes the number of steps: going from 1,000 elements to 1,000,000 only raises the count from roughly 10 to roughly 20.
There are also other time complexities like O(2^n), O(n log n), etc., but the basic idea is the same. You can learn more about them in your algorithms book or with a quick Google search.
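As a quick taste, the classic textbook example of exponential growth is naive recursive Fibonacci (my sketch below, with no memoization), which is roughly O(2^n):

function fib(n) {
  // Each call branches into two recursive calls, so the call tree
  // roughly doubles with every extra level: about 2^n calls in total.
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}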
(Image: growth curves for the various time complexities discussed above.)
Let me know if you still have any questions.