How We Crunched the Numbers

We asked for it, and you delivered. A total of 3,379 reader responses cascaded in for our second online brokers survey. With the decks awash in raw data, our task was to sort through the mound of responses, separating the herring from the mackerel, looking for common patterns that would reveal the currents in the ocean of online brokers.

How did we do it? Think 36 hours as a data-crunching chimpanzee for starters, and you have some idea of the project's scope. As for specifics, here they are, as detailed as possible for the data heads among you.

The Survey

The survey (actually created on the tech side by Informative) was active on the site for nearly a week, from 10:30 a.m. EST Tuesday, Oct. 27, until midnight Monday, Nov. 2. Oct. 27 marked exactly a year after Gray Monday and six months since we did our first online brokers survey, in late April 1998.

The October survey's interface had a familiar feel, and for good reason. We wanted to make valid comparisons of issues from survey to survey. Many questions were identical to the previous questionnaire.

Just like last time, we divided the survey into three sections.

  • Demographic: Who are the TSC readers who trade online?
  • Reader Priorities: What are the most important factors for readers in choosing an online broker?
  • Firm Ratings: How do the brokers stack up in actual field conditions according to the people who use them?

There were some important changes this time, however. First of all, we expanded the survey to include more questions on reader demographics (like the size of the average trade) and detailed brokerage-reliability issues. Next, we improved the mechanics of the survey to return more accurate results to the home office. For example, "cookies" helped prevent users from skewing data by submitting multiple responses. As a double-check, we required readers to include their email addresses, preventing the submission of more than one survey per address.
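As a sketch of that double-check, here's how a one-survey-per-email rule might look in Python. The field names and the code itself are illustrative; the article doesn't describe the survey's actual backend.

```python
def deduplicate(responses):
    """Keep only the first survey submitted per email address.

    `responses` is a list of dicts; "email" is a hypothetical field
    name standing in for whatever the real form used.
    """
    seen = set()
    unique = []
    for r in responses:
        email = r["email"].strip().lower()  # normalize before comparing
        if email not in seen:
            seen.add(email)
            unique.append(r)
    return unique

# hypothetical submissions, including one duplicate address
submissions = [
    {"email": "reader@example.com", "rating": 8},
    {"email": "Reader@example.com", "rating": 9},  # duplicate, dropped
    {"email": "other@example.com", "rating": 6},
]
clean = deduplicate(submissions)
# only the first submission per address survives
```

Normalizing case and whitespace before comparing addresses is what makes the second submission count as a duplicate rather than a new respondent.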

Finally, we set up the survey so it would not accept improperly filled-out surveys. We know this strict system, as well as other difficulties, undoubtedly frustrated some readers. "This will be my third attempt to submit this survey," Mary White wrote in her survey. "I will not try again." We're sorry about the hassle. But we wanted to ensure that all submissions were accurate and complete, and we think we did. We appreciate your help and patience.

Once we had the raw data in hand, we imported the results into a database program to begin the number crunch.

The Rankings: A Short Course in Stats

Statistics is kind of a magical science. As election-day exit polls have demonstrated, you can gauge the sentiments of a huge population with uncanny accuracy simply by surveying a tiny sample of respondents.

But since we can never be 100% certain of any result unless we question every single living being in the universe, you often see poll results with a plus-or-minus margin of error. The tighter the range, the more accurate the data. That accuracy rests largely on two factors:

  • The sample size: the larger the response, the more accurate the results.
  • The consistency of the data: response sets that are basically the same are deemed more accurate than ones that are wildly different.
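Those two factors combine in the standard formula for a poll's margin of error: the spread of the responses divided by the square root of the sample size. A quick sketch in Python, using illustrative numbers rather than survey data:

```python
import math

def mean_and_margin(scores, z=1.96):
    """Return the sample mean and an approximate 95% margin of error.

    Larger samples shrink the margin through sqrt(n); more consistent
    answers shrink it through the standard deviation.
    """
    n = len(scores)
    mean = sum(scores) / n
    variance = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    margin = z * math.sqrt(variance) / math.sqrt(n)
    return mean, margin

# two hypothetical response sets with the same average score (7.5)
consistent = [7, 8, 7, 8, 7, 8, 7, 8]
scattered = [3, 10, 5, 10, 4, 10, 8, 10]
m1, e1 = mean_and_margin(consistent)
m2, e2 = mean_and_margin(scattered)
# same mean, but the scattered set carries a much wider error range
```

Both sets average 7.5, yet the scattered responses produce an error range several times wider — which is exactly why consistency matters as much as sample size.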

We're highly sensitive to these statistical vagaries and aren't about to call a clear-cut winner where the error range is too great. So we plugged all our data into "whisker plots" like the one here rating customer service. The blob in the middle of each line marks the average score we got from our readers, while the vertical line shows the range where the "real" score might lie, given the margin of error.

The farther you move along the vertical line from the center, the lower the odds that the "real" score lies there.

While the top five firms here seem relatively comparable, we can conclude with certainty that


is significantly better than, say,


in customer service, since their ranges don't overlap. In addition,


comes in as the hands-down customer service flop despite its relatively large error margin (we're awarding these glaring washouts our special Stinker award whenever they show up in our results).
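The "ranges don't overlap" test can be stated precisely: two firms are separable only when one firm's entire error range sits above the other's. A Python sketch with made-up scores, not actual survey results:

```python
def separable(mean_a, err_a, mean_b, err_b):
    """True if the two error ranges do not overlap --
    i.e., we can call a clear winner between A and B."""
    low_a, high_a = mean_a - err_a, mean_a + err_a
    low_b, high_b = mean_b - err_b, mean_b + err_b
    return low_a > high_b or low_b > high_a

# illustrative scores on a 1-to-10 customer-service scale
clear_winner = separable(8.2, 0.3, 6.9, 0.4)   # ranges are disjoint
too_close = separable(8.2, 0.3, 7.8, 0.4)      # ranges overlap
```

In the first comparison the lower bound of the leader (7.9) clears the upper bound of the laggard (7.3), so we can call a winner; in the second the ranges overlap, and we can't.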

We were able to rate only eight firms with statistical confidence. These had sample sizes large enough that the vertical error lines were reasonably small. We also had enough data to give some loose results on the second-tier firms (in terms of survey popularity):



Brown & Co.

Web Street Securities
These firms each received more than 1% of the vote, or at least 34 votes, and the responses were generally consistent. So we decided the data were legitimate enough to publish. Still, we can't back these with the same confidence as the big eight. We're calling these our "League-B" brokers, and we'll have a separate piece on their results on Friday.
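That cutoff is simple arithmetic on the 3,379 total responses: 1% works out to 33.79 votes, so "more than 1%" means at least 34. In Python, with hypothetical firm names and tallies:

```python
import math

TOTAL_RESPONSES = 3379
# "more than 1% of the vote" -> smallest whole number above 33.79, i.e. 34
cutoff = math.floor(TOTAL_RESPONSES * 0.01) + 1

# hypothetical vote counts, not the survey's actual tallies
votes = {"Firm A": 210, "Firm B": 34, "Firm C": 33}
league_b_eligible = [firm for firm, n in votes.items() if n >= cutoff]
# Firm C misses the cut by a single vote
```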

Overall Scores

From the get-go, we designed our survey to revolve around you, the reader. Unlike other rating schemes concocted in the isolation of some editorial conference room, we didn't presume to know what's most important to investors in choosing an online broker. And since we don't own stocks ourselves, there's no way for us to know personally which brokers actually back up their flashy ads, hyperbolic claims and glossy literature with hard performance.

So we relied on what you told us were the important factors in choosing an online broker to come up with each broker's score. We asked you to rank our list of factors (e.g., reliability, commissions, tools) in order of importance from 1 to 10. After averaging your surveys, we assigned each factor a percent weighting (using a logarithmic scale, for those who are interested).

In ranking the brokers, we gave the factors you called most important a proportionally heavier weight than the criteria you identified as less important. For example, how the brokers performed in reliability counted toward 18% of their final overall score. In contrast, their score on the news and tools they offered counted toward only 6%.
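Here's a sketch of that weighted-scoring step in Python. The reliability (18%) and news-and-tools (6%) weights come from the figures above; the remaining categories and weights are placeholders we invented so the numbers sum to 100%, and the broker's scores are likewise made up:

```python
# Percent weight per category. Only reliability (18%) and news/tools (6%)
# are from the article; the rest are hypothetical placeholders chosen
# so that the weights total 100%.
WEIGHTS = {
    "reliability": 0.18,
    "commissions": 0.16,
    "execution_price": 0.15,
    "confirmation_speed": 0.13,
    "customer_service": 0.12,
    "account_data": 0.10,
    "breadth_of_offerings": 0.10,
    "news_and_tools": 0.06,
}

def overall_score(category_scores):
    """Weight-sum a broker's per-category averages into one overall score."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())

# an illustrative broker, scored 1-to-10 per category (not real survey data)
broker = {cat: 7.0 for cat in WEIGHTS}
broker["reliability"] = 9.0      # strong where readers care most
broker["news_and_tools"] = 4.0   # weak where they care least
score = overall_score(broker)
```

Because reliability carries three times the weight of news and tools, the broker's strong reliability score lifts its overall mark more than its weak news-and-tools score drags it down.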

We did notice some significant differences in category rankings across different demographic groups. For example, small online investors (less than $5,000 for an average trade) value low commissions significantly more than larger investors, while $100,000-plus traders are more concerned about execution price than any other factor. In addition, day traders will kill for speedy order confirmation, whereas long-term investors value tools and news offerings more than their more frenetic trading colleagues do.

None of these variances comes as a big surprise. Moreover, even when we broke down the results by type of trader (day trader, active, long-term) or by size of average trade, the overall broker rankings stayed essentially the same.

In the end,


seems to have a robust enough blend of the features investors care about most to make it tops for just about any user in today's online brokerage marketplace. But if your needs depart from the average blend, shop around and use our criteria-specific rankings to help make your decision.

The technical side of this online brokers survey was conducted by Informative, October 1998.