How NetBig.com ranks Chinese Universities:
An explanation of the ranking methodology

The overall rankings are based upon six indices in four categories: academic quality, student quality, teaching resources and research funding. We use two indices to reflect a university's academic reputation: 1) academic reputation as viewed by peers, and 2) academic publications, both Chinese and international. A single index reflects "student quality": 3) the average National College Entrance Examination score of enrolled students. Two indices reflect "teaching resources": 4) the ratio of teachers holding the academic title of associate professor or higher, and 5) the faculty-student ratio. A single index reflects research funding: 6) the annual university research expenditure. Each index is assigned its own weight.

Our choice of measures of academic quality has largely been constrained by the availability of data. As we would like to capture the overall picture of Chinese universities with precision, we chose a short list of indices, each based upon reliable data. All six measures are based on statistics, with the sole exception of "academic reputation viewed by peers", which draws on the subjective judgements of the elite of Chinese academia. Our rankings, being based on university statistics and expert opinion, should objectively capture the overall picture of Chinese universities.

Chinese universities were initially set up along both regional and specialty lines: each was planned to attract students from its own region, with fields constrained by its specialty. This arrangement, however, has blurred in recent years. The top schools in each region now draw students from other regions, and top schools with distinct specialties are setting up fields beyond their original specialties. It is no surprise today to find a business school or a law school in a university designated for science and technology. To reflect this educational reality, we have pooled the schools into one single ranking rather than comparing schools within each specialty.

In the following, we give a detailed illustration of our computation methods and ranking methodology.

Ranking methodology and weights of different items

Academic reputation, weight 50%. It is based on:

Assessment by Academy of Sciences fellows and university presidents, weight 30%

Number of publications indexed by SCI, EI, and ISTP, weight 20%,

Student selectivity, calculated using the National College Entrance Examination scores of students newly enrolled in 1998, weight 30%,

Faculty resources, weight 10%, among which:

Ratio of faculty with academic title of associate professor and higher, weight 5%

Faculty / Student ratio, weight 5%

Financial resources, calculated from the total annual research expenditure of a university, weight 10%.

We assign the highest weight to the indices in the category of academic reputation, as they are the only "outcome measures" in our set of measures of academic excellence. The second highest weight goes to student selectivity since, in general, the quality of students a school can attract relates to the additional value its education can provide. A relatively lower weight is given to the categories of faculty resources and financial resources. These "input measures" reflect the opinion of the government more than that of the general society. We nevertheless assign them a significant weight because in China, where the government provides most education funding, the opinions of the government matter.

Computational Principles

The scores of the above categories, each carrying its proper weight, are summed to produce a total score for a university. The score of an individual category is a relative score based on a school's standing in that particular category: a score of 100 is assigned to the school ranked first, and the scores of the other schools are scaled accordingly.
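The relative-scoring step above can be sketched in a few lines; the function name and data shape here are our own illustration, not NetBig's actual code:

```python
def relative_scores(raw):
    """Scale raw category values so the best school scores 100.

    raw: {school_name: raw_category_value}; larger values are better.
    """
    best = max(raw.values())
    return {school: 100.0 * value / best for school, value in raw.items()}
```

For instance, if school A leads a category with a raw value twice that of school B, A receives 100 and B receives 50.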

Computation Methods for the Assessment by Academy of Sciences Fellows and University Presidents

We asked the fellows and presidents to rate the universities on a scale of 1 to 5, with 5 being the highest rating.

We provided a list of the 40 best-known universities in China for our respondents to rate. Any respondent could, however, add up to 5 more universities to the list if they thought a more comprehensive list should be used. Gathering the respondents' additions, we arrived at a list of 50 schools in total. The schools added by respondents typically received a higher average score, possibly because of small-sample bias; for example, some added schools received a rating from only a single respondent. We therefore set up the following system to adjust for this potential bias:

If a school received ratings from fewer than 10 respondents, the rating is not statistically significant. We ignore the rating in this case and assume it equals the average score of the other items, i.e., the number of publications, student selectivity, faculty resources, and financial resources;

If a school received ratings from 10-49 respondents, its score is calculated as 0.85 × Average Rating;

If a school received ratings from 50 or more respondents, its score is calculated as 1.00 × Average Rating.
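The three adjustment rules above can be sketched as a small helper. Here `fallback` stands in for the average of a school's other item scores (publications, student selectivity, faculty resources, financial resources); the function name is our own:

```python
def adjusted_reputation(avg_rating, n_respondents, fallback):
    """Apply the respondent-count adjustment to a school's average rating."""
    if n_respondents < 10:
        # Fewer than 10 respondents: rating not statistically significant,
        # so substitute the average of the school's other item scores.
        return fallback
    if n_respondents < 50:
        # 10-49 respondents: discount the average rating by 15%.
        return 0.85 * avg_rating
    # 50 or more respondents: take the average rating at face value.
    return 1.00 * avg_rating
```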

Score on publications,

The score is based on the total number of publications by each school in 1998, as indexed in SCI, EI, and ISTP.

The computation of scores of new enrollments in the National College Entrance Exam,

We have used the entrance examination scores of every newly enrolled student at every school in 1998.

Computational method: First, we separate students into an Arts/Social Sciences stream and a Science/Technology stream. We then calculate the average score for each stream of a school in each province (the province the freshmen come from). The average scores of all universities are sorted to produce a rank for each stream within each province. Second, based on these provincial ranks, weighted by the number of students enrolled, we arrive at a nationwide rank for each stream within each university. Last, the overall rank is computed from the numbers for the two streams, weighted by the number of students enrolled in each stream. This number is then used as the score for the "student selectivity" item. To make the scores of different universities more comparable, we apply the following treatment to schools with too few enrolled students:

If enrollment in either stream is fewer than 50 students, the school's score for this category is discarded;

If total enrollment across both streams is fewer than 150 students, the school's score for this category is discarded.
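The three-step procedure above can be sketched as follows, assuming a simple record shape of (province, stream, school, average score, enrollment). This is our own illustration of the described method, not NetBig's code, and lower numbers mean greater selectivity:

```python
from collections import defaultdict

def selectivity_rank(records):
    """records: (province, stream, school, avg_score, n_enrolled) tuples,
    one per school/province/stream cell. Returns one enrollment-weighted
    rank number per school (lower = more selective)."""
    # Step 1: rank schools within each (province, stream) by average score.
    cells = defaultdict(list)
    for prov, stream, school, avg, n in records:
        cells[(prov, stream)].append((school, avg, n))
    prov_ranks = []  # (school, stream, provincial_rank, n_enrolled)
    for (_prov, stream), rows in cells.items():
        rows.sort(key=lambda r: -r[1])  # highest average score gets rank 1
        for rank, (school, _avg, n) in enumerate(rows, start=1):
            prov_ranks.append((school, stream, rank, n))
    # Step 2: enrollment-weighted average of provincial ranks per stream.
    acc = defaultdict(lambda: [0.0, 0])
    for school, stream, rank, n in prov_ranks:
        pair = acc[(school, stream)]
        pair[0] += rank * n
        pair[1] += n
    stream_rank = {key: s / n for key, (s, n) in acc.items()}
    # Step 3: weight the two stream numbers by stream enrollment.
    totals = defaultdict(lambda: [0.0, 0])
    for (school, stream), r in stream_rank.items():
        n = acc[(school, stream)][1]
        totals[school][0] += r * n
        totals[school][1] += n
    return {school: s / n for school, (s, n) in totals.items()}
```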

Faculty resources

We use three figures in the calculation: the number of teachers, the number of teachers with the academic title of associate professor or higher, and the number of students (adult education and long-distance learning excluded).

The ratio of teachers with the academic title of associate professor or higher gives one score for each university,

The total number of faculty is divided by the total number of students to arrive at the faculty/student ratio,

The score on faculty resources is calculated based on the above two scores.

Faculty resources are not calculated for schools with fewer than 150 full-time faculty.
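The two inputs to this category can be sketched as below. How the two ratios are combined into a single category score is not spelled out beyond their equal 5% weights, so this hypothetical helper computes only the ratios themselves:

```python
def faculty_ratios(n_faculty, n_senior, n_students):
    """Return (senior-title ratio, faculty/student ratio), or None if the
    category is not calculated for this school."""
    if n_faculty < 150:
        return None  # fewer than 150 full-time faculty: category skipped
    senior_ratio = n_senior / n_faculty          # associate professor or higher
    faculty_student_ratio = n_faculty / n_students
    return senior_ratio, faculty_student_ratio
```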

Research Expenditure

The research expenditure figure is a university's total research expenditure in 1998.

The overall rankings,

All the above figures are summed, each carrying its respective weight, and the resulting numbers for all schools are ranked. The top-ranked school is assigned a score of 100, and a relative score is assigned to every other university. Missing values are treated as 0.
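The final aggregation can be sketched as a weighted sum of the six relative scores, with missing categories counted as 0; the dictionary keys are our own labels for the six indices listed earlier:

```python
# Weights as stated in the methodology: 30% + 20% reputation, 30% selectivity,
# 5% + 5% faculty resources, 10% research expenditure.
WEIGHTS = {
    "peer_assessment": 0.30,
    "publications": 0.20,
    "student_selectivity": 0.30,
    "senior_faculty_ratio": 0.05,
    "faculty_student_ratio": 0.05,
    "research_expenditure": 0.10,
}

def total_score(category_scores):
    """Weighted sum of a school's relative category scores (each 0-100).

    Categories missing from `category_scores` contribute 0.
    """
    return sum(w * category_scores.get(k, 0.0) for k, w in WEIGHTS.items())
```

A school scoring 100 in every category totals 100; a school with only a peer-assessment score of 100 and everything else missing totals 30.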