Neither rankings nor ratings can predict the ultimate success of a fund

MUTUAL FUNDS

May 29, 1994 | By WERNER RENBERG | © 1994 Werner Renberg

Do you rely on equity and bond fund ratings when deciding which funds to invest in?

Do you understand where ratings and rankings come from?

Do ratings really help you to make good decisions?

Posed by Dudley H. Ladd, the Scudder, Stevens & Clark executive in charge of Scudder Funds, such questions about investor behavior, knowledge and opinion were the focus of a discussion at the recent annual meeting of the Investment Company Institute, the industry's principal trade association.

Touching on the wealth of data now available to mutual fund investors such as you, and the ways in which investors and fund marketers apparently use those data, participants seemed to agree on some key points:

* Investors rely heavily on ratings and rankings, but no one knows how many investors believe they have predictive value.

* Too few investors apparently make a sufficient effort to understand how ratings and rankings are computed and, therefore, what they really mean.

* Investment decisions based exclusively on top ratings or rankings don't always lead to the expected results.

* Some marketers -- firms that sponsor funds and salespeople who sell them -- use ratings or rankings in ways that may be misleading.

Ladd, who moderated the discussion, began by noting conflicts between advocates of rankings and of ratings.

Ranking systems are epitomized by Lipper Analytical Services, the New York firm that calculates the performance of about 5,000 equity and bond mutual funds, classifies them according to its analysis of their investment objectives, and ranks them for various periods of time.

Rating systems are epitomized by Morningstar, the Chicago firm that (in addition to calculating performance data for about 3,700 funds) adjusts the performance data of about 2,300 for their levels of riskiness, as defined by Morningstar, and assigns them from one to five stars for "risk-adjusted" performance. Five stars, symbolizing the top rating, go to about 200 funds.
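
The difference is easy to see in miniature. Below is a hedged Python sketch with invented return and risk figures; the real Lipper classifications and Morningstar risk adjustment are far more involved, and the risk penalty and star cutoffs here are assumptions, not Morningstar's method.

```python
# A toy peer group of four funds. Returns and risk figures are invented;
# real systems use multi-year data and proprietary risk measures.
funds = {
    "Fund A": {"ret": 0.18, "risk": 0.9},
    "Fund B": {"ret": 0.15, "risk": 0.3},
    "Fund C": {"ret": 0.12, "risk": 0.5},
    "Fund D": {"ret": 0.05, "risk": 0.2},
}

# Ranking: order the peer group purely by reported return.
ranking = sorted(funds, key=lambda f: funds[f]["ret"], reverse=True)

# Rating: dock each return for risk (penalty weight assumed), then
# spread the adjusted scores across one to five stars.
score = {f: d["ret"] - 0.1 * d["risk"] for f, d in funds.items()}
ordered = sorted(funds, key=lambda f: score[f])  # worst to best
stars = {f: 1 + round(4 * i / (len(ordered) - 1)) for i, f in enumerate(ordered)}

print("Ranking:", ranking)  # Fund A leads on raw return
print("Stars:", stars)      # Fund B outrates Fund A once risk counts
```

The point of the toy example is the one the discussants kept circling: the two systems can disagree about the same funds, because one measures only return and the other measures return per unit of risk.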

"Rankers see themselves as the purists," Ladd said, without revealing which school of thought he belongs to.

"They regard a ranking as akin to a droplet of holy water from a pool of universal truth -- that is, the performance records of the individual mutual funds, enshrined in the (Securities and Exchange Commission) and blessed by certified public accountants.

"They would say that ratings are mere opinions, hitched to the puny little burros of human subjectivity.

"Raters, on the other hand . . . believe rankings have no intrinsic value . . . are useless unless someone turns them into usable information by imbuing them with interpretation."

To illustrate how much investors apparently rely on rankings and ratings when investing in funds, Ladd cited a couple of studies.

One, by Strategic Insight, an industry research and consulting firm, found that "a startling 45 percent" of all money invested last year in directly marketed no-load or low-load funds went into funds given five stars by Morningstar at the end of 1993. Another 26 percent went into four-star funds, 7 percent into the majority of funds given one to three stars, and the remaining 22 percent went into funds -- mainly international -- too new to be rated.

A similar study by Harvard Business School professor Jay Light, which compared cash flows with funds' Lipper decile rankings from 1970 to 1992, produced a graph "that looks exactly like a hockey stick." Funds ranked in the lowest eight deciles got virtually no cash, the next 10 percent got some, and the top 10 percent got the most.

While raters and rankers may explain to users how they calculate their data, discussants wondered whether investors regard stars and numbers as recommendations or whether they grasp their significance and go further to understand the funds they're considering.

Moreover, they noted that some investors may have been misled by promotional claims that use ratings without adequate explanation or emphasize rankings for periods that make funds look good.

Do top rankings and ratings -- which, after all, are based on past performance -- really help investors to find funds that will do well in the future? Do they have predictive value -- something that Lipper and Morningstar, for example, do not claim?

Having studied a sample of 300 funds in Lipper's growth funds category, Ladd said, he found that more than one-third of those ranked in the top decile in 1990 had fallen to the bottom half the next year. After three years, nearly two-thirds had done so. Only 10 percent had stayed in the top decile.
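
The arithmetic behind such a persistence check is simple to sketch. The Python below is hypothetical from end to end: it assigns 300 invented funds random annual returns, sorts them into deciles each year, and tracks where 1990's top decile lands later. Only the method mirrors what Ladd describes; none of the numbers are his.

```python
import random

random.seed(0)
years = [1990, 1991, 1992, 1993]
funds = [f"fund_{i}" for i in range(300)]
# Invented annual returns, drawn independently each year.
returns = {y: {f: random.gauss(0.10, 0.15) for f in funds} for y in years}

def decile(year, fund):
    """1 = top decile by return within the peer group, 10 = bottom."""
    ordered = sorted(funds, key=lambda f: returns[year][f], reverse=True)
    return ordered.index(fund) * 10 // len(funds) + 1

top_1990 = [f for f in funds if decile(1990, f) == 1]
for year in years[1:]:
    fell = sum(decile(year, f) > 5 for f in top_1990)
    stayed = sum(decile(year, f) == 1 for f in top_1990)
    print(year, f"fell to bottom half: {fell}/{len(top_1990)},",
          f"still top decile: {stayed}")
```

With returns drawn purely at random, about half of the former stars land in the bottom half in any later year and about one in ten sits in the top decile; that is the benchmark a claim of predictive value has to beat.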

All of which reinforces the conviction that there are no shortcuts to successful investing in mutual funds. Ratings, as Morningstar publisher Don Phillips put it, are "a first stage screen." They are a tool for reducing a mass of funds to a smaller number for further study to see what makes them tick and which may be right for you.
