What is logn?

The short version is that it's a geeky computer science term that also happened to be short enough that I liked it as a domain name. Despite what others may tell you, it's not a misspelling of "login". Also, I tend to pronounce it as "log" followed by the letter "n".

The long version is that it's shorthand for logarithmic time, the term used to describe a class of highly efficient algorithms. If you'll indulge me in a short lesson...

Time Complexity

In computer science, algorithms are often compared by the amount of time or memory that they consume to perform a task. In order to make these comparisons fair and easy to understand, we analyze them based on the relationship between the size of the task and the number of steps required to complete the task.

For example, suppose you wanted to average a list of numeric values. In order to do so, you'd need to add up all the numbers, and then divide by the number of values you had. The number of steps is directly related to the number of values: if you doubled the number of values to average, you'd expect it to take twice as long. This is an example of linear complexity: as the number of items (abbreviated as n) grows, the time to complete the task grows at the same rate.
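A minimal sketch of that averaging task in Python (the function name is mine, just for illustration):

```python
# Averaging a list takes one pass over the data, so the number of
# steps grows linearly with the number of values: O(n).
def average(values):
    total = 0
    for value in values:  # one step per value
        total += value
    return total / len(values)

print(average([2, 4, 6, 8]))  # → 5.0
```

Double the length of the list and the loop runs twice as many times: linear time.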

You can certainly do much worse than linear time; many tasks require polynomial or even exponential time to complete! Simple sorting algorithms are a good example of polynomial time: you must make multiple passes over the data to sift everything into a final order, and those nested passes add up to quadratic time.
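As a sketch of those repeated passes, here is insertion sort in Python, a classic quadratic-time sorter:

```python
# Insertion sort: for each element, scan back through the already-
# sorted prefix to find its place. The nested passes over the data
# make this quadratic, O(n^2), in the worst case.
def insertion_sort(items):
    items = list(items)  # sort a copy, leave the input alone
    for i in range(1, len(items)):
        j = i
        while j > 0 and items[j - 1] > items[j]:
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items

print(insertion_sort([5, 2, 9, 1]))  # → [1, 2, 5, 9]
```

Doubling the list here roughly quadruples the work, which is why naive sorting feels so much slower than averaging on big inputs.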

However, there are also some algorithms that are better than linear time. Consider a guessing game where a human thinks of a number between one and 100, and the computer has to guess what it is. If the computer guesses incorrectly, the human tells the computer whether the answer is "higher" or "lower" than the guess. By always guessing the number in the middle of the remaining range, the computer can use that piece of information to eliminate half of the remaining numbers with every guess.
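That guessing strategy is just binary search. A small simulation in Python (the function and its return value are my own framing):

```python
# Simulate the guessing game: binary search over 1..100.
# Returns how many guesses the computer needed.
def guess_number(secret, low=1, high=100):
    guesses = 0
    while True:
        guess = (low + high) // 2  # always pick the middle
        guesses += 1
        if guess == secret:
            return guesses
        elif guess < secret:       # human says "higher"
            low = guess + 1
        else:                      # human says "lower"
            high = guess - 1

print(guess_number(73))  # → 6
```

No matter which number the human picks between 1 and 100, the computer never needs more than seven guesses.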

Because every guess divides the remaining number pool in half, these repeated halvings compound and create an inverse exponent: a logarithm. Effectively, the number of steps to solve the problem grows linearly as the problem space grows exponentially. Thought of another way, doubling the size of the problem results in only a single additional step (compare this to a linear algorithm, where doubling the problem size also doubles the number of steps). We abbreviate this relationship as logn, showing that as n (the problem size) gets larger, the number of steps is only the logarithm of that number.
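You can see the "one extra step per doubling" behavior directly by counting the worst-case number of halvings for a few problem sizes:

```python
import math

# Worst-case number of halvings needed to isolate one value
# among n candidates: ceil(log2(n)).
for n in (100, 200, 400, 800):
    print(n, math.ceil(math.log2(n)))
# Each doubling of n adds exactly one step: 7, 8, 9, 10.
```

Going from 100 candidates to 800 only costs three more guesses, while a linear scan would have cost 700 more.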

Thus, logarithmic algorithms are highly efficient and well-regarded. I thought it would be fun to name my domain after them.