Much of the lingo in our business, as in many businesses, falls somewhere between tough to understand and utterly impenetrable. Like Lewis Carroll's Humpty Dumpty in Through the Looking Glass -- "'When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean -- neither more nor less.'" -- we tend to use language to mean what we want it to mean, not what most others think it means.

A constant thread in my mail from TheStreet.com subscribers asks what one or another of these curious terms really means. Because I work, in effect, at the intersection of technology and investing, I get a double dose: odd terms from both areas.

A steady stream of those inquiries involves the word scalability. Nearly everyone knows the meaning of scalability in the most common, real-world terms; but it has special, and different, meanings in the tech world and in the business world. So let's tackle both.

I call these variations engineering scalability and business-model scalability. Both are important.

Engineering scalability is often used in computer hardware and software design and specs. Sun Microsystems (SUNW), for example, often uses the term scalability to describe the expandability of its high-priced, high-power servers. The idea is that once you buy one of these machines, the capacity of the base machine -- the number of concurrent users it can handle -- can be increased by adding processors, memory, and internal or external disk drives.

Sooner or later, yes, you do have to buy a second Sun server -- and then a 10th and a 50th and a 100th, if you're so lucky that your business grows that much -- but the underlying promise is that you'll get relatively more mileage out of that first machine ... and that when you do buy more, all will work together smoothly.

Somewhat oversimplified, that's the computer-hardware definition for scalability.

Software scalability is similar. Network operating systems, especially, should be able to handle more and more simultaneous users, without hitting the wall, without performance slowing to a crawl. To use Sun as an example again, its Solaris flavor of Unix has a great reputation as a highly scalable operating system: You can keep piling on more users.

Again, performance eventually does drop, and you have to add more hardware under the operating system. Then, Solaris promises, all your connected, load-sharing servers, running under Solaris, will work smoothly together, as you pile on more and more servers.
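For readers who like to see the math behind these claims: the diminishing returns of "just add more processors" are often modeled with Amdahl's law, which says speedup is capped by whatever fraction of the work must run sequentially. The sketch below is a generic illustration -- the 5% serial fraction and the processor counts are made-up numbers, not anything from Sun's or Microsoft's spec sheets.

```python
# Amdahl's law: why piling on processors eventually stops helping.
# serial_fraction is the (hypothetical) share of the workload that
# cannot be parallelized; speedup flattens as processors are added.

def amdahl_speedup(processors: int, serial_fraction: float) -> float:
    """Theoretical speedup on `processors` CPUs when `serial_fraction`
    of the workload must run sequentially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for p in (1, 2, 4, 8, 16, 32):
    print(f"{p:2d} processors -> {amdahl_speedup(p, 0.05):.2f}x speedup")
```

Even with only 5% of the work stuck running serially, 32 processors buy you roughly a 12.5x speedup, not 32x -- which is why vendors' scalability claims deserve third-party testing.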

One of the big knocks on Microsoft's (MSFT) Windows NT had been that it wasn't very scalable. No matter the power and quantity of the underlying hardware, it hit the wall too soon as you added users. Unix in general, Solaris in particular, and most versions of Linux all were perceived as more scalable. Bad news for Microsoft.

So when Microsoft rolled out Windows 2000, the successor to Windows NT, this spring, it made sure it was ready to deal with questions about scalability. It could turn to extensive third-party tests to show -- surprise! -- that when running on very high-end Compaq servers, Windows 2000's most advanced versions were, in fact, highly scalable ... even more scalable -- surprise again! -- than the competition's.

So scalability turns out to be a big issue in the computer world. Claims and proofs of scalability -- this kind of engineering scalability -- are key to big sales and to building the ongoing reputation of a company and its products.

Over here on the investment side of the street, what I call "business-model scalability" has a completely different, though clearly related, meaning. In the business world, and especially in Web businesses, managers speak of the scalability of their revenue models, by which they mean that after reaching a certain, predictable critical mass in terms of capital investment and number of users signed up, they can then serve many more customers with almost no additional costs.

Say you're running an information Web site. Once you've built your technology infrastructure, built your editorial staff, have your marketing programs in place, and have hit some critical mass of subscribers, then what's the incremental cost of adding another customer? A thousand more customers? Ten thousand more customers?

Essentially, if not quite, zero.

In other words, you start using the power of capitalism -- using money to make more money -- to kick into what people like to call the hockey-stick-shaped curve not only in revenue, but also profits.

Because labor represents, for most businesses, about 70% of the cost structure, an approach that allows growth past some break-even point without adding staff -- or by adding fewer staff, per X-thousand users, than originally required -- can be spectacularly profitable.
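The arithmetic behind that hockey stick is simple enough to sketch. In the toy model below, every number is hypothetical -- a made-up subscription site with $5 million in fixed annual costs (infrastructure, editorial staff, marketing) and 50 cents of marginal cost per additional subscriber -- but it shows how profit swings sharply once fixed costs are covered.

```python
# A toy model of business-model scalability: fixed costs dominate,
# so average cost per subscriber falls toward the near-zero marginal
# cost as the user base grows. All figures are hypothetical.

FIXED_COSTS = 5_000_000   # infrastructure, editorial, marketing (annual, $)
MARGINAL_COST = 0.50      # bandwidth/support per added subscriber ($)
PRICE = 100.00            # annual subscription price ($)

def profit(subscribers: int) -> float:
    """Annual profit: contribution margin on each sub, less fixed costs."""
    return subscribers * (PRICE - MARGINAL_COST) - FIXED_COSTS

def avg_cost(subscribers: int) -> float:
    """Average cost per subscriber: fixed costs amortized, plus marginal."""
    return FIXED_COSTS / subscribers + MARGINAL_COST

for n in (10_000, 50_000, 100_000, 500_000):
    print(f"{n:7,d} subs: avg cost ${avg_cost(n):7.2f}, profit ${profit(n):13,.0f}")
```

At 50,000 subscribers this imaginary site is still slightly underwater; at 100,000 it nets almost $5 million, and at 500,000 nearly $45 million -- the revenue line grows linearly, but profit takes off once the fixed-cost hurdle is cleared.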

Later today: Did the Web give new meaning to scalability, or is it some New Economy smoke and mirrors?

Jim Seymour is president of Seymour Group, an information-strategies consulting firm working with corporate clients in the U.S., Europe and Asia, and a longtime columnist for PC Magazine. Under no circumstances does the information in this column represent a recommendation to buy or sell stocks. At time of publication, neither Seymour nor Seymour Group held positions in any securities mentioned in this column, although holdings can change at any time. Seymour does not write about companies that are current or recent consulting clients of Seymour Group. While Seymour cannot provide investment advice or recommendations, he invites your feedback at jseymour@thestreet.com.