As a guide for investors, both rankings and ratings have shortcomings

July 10, 1994 | By New York Times News Service

Among the myriad pieces of information an investor can sort through to pick a mutual fund, are "ratings" or "rankings" more useful?

Rankings, of course, says A. Michael Lipper, who heads his own mutual fund research company where funds are ranked by performance. In a recent study sent to fund industry executives, he sharply criticizes ratings, epitomized by the system used by his big rival, Morningstar Inc., the research company based in Chicago.

While it is easy to dismiss Lipper's complaint as the bad-mouthing of a competitor, he is highlighting an important debate for the investor, and some of his points are well taken, though others are not.

Lipper Analytical Services Inc., of Summit, N.J., produces its own mutual fund reports, which are widely distributed in the industry and often quoted by fund companies in advertisements.

Lipper breaks down the fund universe into many small categories and ranks funds over various time periods; it is a rare fund group that cannot find a period in which at least one of its funds did well. Those rankings are published in The Sun on Sundays.

Morningstar, by contrast, rates funds in four broad categories on a scale of one star (poor) to five (excellent), based largely on performance over the last 3, 5 and 10 years. The ratings are adjusted for risk -- month-to-month volatility of performance -- and also are used by funds in advertising.

In mid-June, the Securities and Exchange Commission announced it was looking for a simple measure of a fund's risk that would become an industry standard in advertising.

Mr. Lipper argues that investors "are giving more credibility to ratings than is warranted." He says it is often a bad idea to buy five-star funds because they may later perform poorly. He has a point: No system based on the past will always work in the future.

Morningstar does not dispute that. "People who use the stars as magic bullets are making a mistake," said John Rekenthaler, editor of Morningstar Mutual Funds, who urges using the ratings as one tool among many.

What Morningstar does dispute is the study's distinction between rankings and ratings. Ranking, the study says, is a "nonjudgmental mathematical process"; ratings are "subjective evaluations."

Even when ratings rely on mathematical systems, the study said, the elements of the rating formula emphasize some fund characteristics over others.

Mr. Rekenthaler counters that Lipper's ratings are subjective, too, in that decisions about how to group funds influence their standing.

The Lipper study measured the performance of Morningstar's five-star funds at the beginning of each year from 1990 through 1993, and found that more than half of the equity funds underperformed the average equity fund in the subsequent 12 months.

In addition, five-star ratings did not help many investors avoid big losses in 1990, the most recent negative year for equity funds. Thirty-two percent of five-star funds lost 10 percent or more that year, while only 27 percent of all equity funds had a loss that size, the study said.

Further, Mr. Lipper asserts that a five-star strategy puts investors "in the wrong fund at the wrong time." At the end of 1992, when overseas funds were poised to take off, none had five stars, he pointed out.

Mr. Lipper can argue that using smaller categories avoids that problem, though his system's focus on total return encourages a similar misstep.

Both Lipper and Morningstar can fall into a different trap: a fund that takes on extra risk and wins will shine in either system, but the risk-taking can also backfire without warning.

For example, Managers Intermediate Mortgage Fund, a Morningstar five-star fund, was also Lipper's top-ranked United States mortgage fund for the five years ended in 1993. Neither system prepared investors for the fund's losing 22.3 percent in 1994 through June. The derivatives that accelerated its performance while interest rates fell damaged it when rates rose.

Ken Gregory, editor of the No-Load Fund Analyst in San Francisco, faults the Lipper study's methodology, saying the 12 months studied are "too short a time period."

And Charles Trzcinka, associate professor of finance at the State University of New York at Buffalo, says Lipper "is setting a standard that no fund rating system can reach."

Morningstar agrees that many five-star funds come from sectors that have performed well recently and may suffer some short-term underperformance. But it points out that it does not advise a strategy of rotating into those funds.

That's not to say Morningstar's ratings are perfect. "There are problems with any sort of rating system that tries to cover all funds," Mr. Gregory said.

For example, apples-to-apples comparisons are difficult because Morningstar's ratings are based on only four categories: equity funds, taxable bond funds, hybrids of those two and municipal bond funds.

That was done to avoid creating many top-ranked funds in many small groups, Mr. Rekenthaler said. Indeed, the National Association of Securities Dealers now forces funds to disclose the size of a group in advertisements using rankings.
