Progress in International Reading Literacy Study 2006
Socio-Economic/Monitoring Survey [hh/sems]
PIRLS 2006 aimed to generate a database of student achievement data, together with student, parent, teacher, and school background information, for the 47 areas that participated in PIRLS 2006.
Kind of Data
Sample survey data
Unit of Analysis
Individuals and institutions
DataFirst downloaded a version of the PIRLS data (as prepared by the IEA) on 31 August 2015. The dataset was originally made available as 329 separate datafiles, defined by country and datafile type (Student Achievement File, Student Background File, Teacher Background File, Home Background File, School Background File, Student-Teacher Linkage File, and PIRLS Within-Country Scoring Reliability File): 47 areas (40 countries, plus Belgium participating as two education systems and Canada as five provinces) multiplied by 7 datafile types yields 329 files. All datafiles of the same type were combined, producing seven datafiles. This is the first version of this dataset hosted by DataFirst.
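The pooling step described above can be sketched in a few lines. This is an illustrative reconstruction, not DataFirst's actual processing script: the country codes, field names, and in-memory records below are hypothetical stand-ins for the 329 per-country files, which in reality were statistical-package exports read from disk.

```python
from collections import defaultdict

# Hypothetical stand-ins for a few of the 329 per-country files, keyed by
# (country, file type). Real files were one export per such pair.
country_files = {
    ("ZAF", "Student Background"): [{"idstud": 1}, {"idstud": 2}],
    ("AUT", "Student Background"): [{"idstud": 9}],
    ("ZAF", "School Background"): [{"idschool": 11}],
}

# Pool all files of the same type, tagging each record with its country so
# the combined file still identifies the originating education system.
combined = defaultdict(list)
for (country, ftype), rows in country_files.items():
    for row in rows:
        combined[ftype].append({**row, "country": country})
```

Applied across all 47 areas and 7 file types, this yields the seven combined datafiles distributed by DataFirst.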
A new approach to scaling the reading purposes and processes was introduced in PIRLS 2011 to enhance measurement of trends over time in these domains. The same approach was applied retrospectively to the PIRLS 2001 and PIRLS 2006 reading purposes and processes so that these data correspond to the trend results presented in PIRLS 2011 International Results in Reading. All data files were updated to reflect that change. Please note that the overall reading achievement scores for PIRLS 2006 remain unchanged.
The PIRLS 2006 contains information on the following:
• Student achievement (on the PIRLS-designed reading test)
• Teacher background
• Student background
• School background
• Parent background
The survey had international coverage
PIRLS is a study of student achievement in reading comprehension in primary school, and is targeted at the grade level in which students are at the transition from learning to read to reading to learn, which is the fourth grade in most countries. The formal definition of the PIRLS target population makes use of UNESCO's International Standard Classification of Education (ISCED) in identifying the appropriate target grade:
"…all students enrolled in the grade that represents four years of schooling, counting from the first year of ISCED Level 1, providing the mean age at the time of testing is at least 9.5 years. For most countries, the target grade should be the fourth grade, or its national equivalent."
ISCED Level 1 corresponds to primary education or the first stage of basic education, and should mark the beginning of "systematic apprenticeship of reading, writing, and mathematics" (UNESCO, 1999). By the fourth year of Level 1, students have had 4 years of formal instruction in reading, and are in the process of becoming independent readers. In IEA studies, the above definition corresponds to what is known as the international desired target population. Each participating country was expected to define its national desired population to correspond as closely as possible to this definition (i.e., its fourth grade of primary school). In order to measure trends, it was critical that countries that participated in PIRLS 2001, the previous cycle of PIRLS, choose the same target grade for PIRLS 2006 that was used in PIRLS 2001. Information about the target grade in each country is provided in Chapter 9 of the PIRLS 2006 Technical Report.
Although countries were expected to include all students in the target grade in their definition of the population, sometimes it was not possible to include all students who fell under the definition of the international desired target population. Consequently, occasionally a country's national desired target population excluded some section of the population, based on geographic or linguistic constraints. For example, Lithuania's national desired target population included only students in Lithuanian-speaking schools, representing approximately 93 percent of the international desired population of students in the country. PIRLS participants were expected to ensure that the national defined population included at least 95 percent of the national desired population of students. Exclusions (which had to be kept to a minimum) could occur at the school level, within the sampled schools, or both. Although countries were expected to do everything possible to maximize coverage of the national desired population, school-level exclusions sometimes were necessary. Keeping within the 95 percent limit, school-level exclusions could include schools that:
• were geographically remote,
• had very few students,
• had a curriculum or structure different from the mainstream education system, or
• were specifically for students with special needs.
The difference between these school-level exclusions and those at the previous level is that these schools were included as part of the sampling frame (i.e., the list of schools to be sampled). They then were eliminated on an individual basis if it was not feasible to include them in the testing.
In many education systems, students with special educational needs are included in ordinary classes. As a result, a further level of exclusion is needed to reach the effective target population – the population of students who ultimately will be tested. These are called within-school exclusions and pertain to students who are part of a regular classroom but cannot be tested for a particular reason. There are three types of within-school exclusions:
• Intellectually disabled students
• Functionally disabled students
• Non-native language speakers
Students eligible for within-school exclusion were identified by staff at the schools and could still be administered the test if the school did not want the student to feel out of place during the assessment (though the data from these students were not included in any analyses). Again, it was important to ensure that this population was as close to the national desired target population as possible. If combined, school-level and within-school exclusions exceeded 5 percent of the national desired target population, results were annotated in the PIRLS 2006 International Report (Mullis, Martin, Kennedy, & Foy, 2007). Target population coverage and exclusion rates are displayed for each country in Chapter 9 of the PIRLS 2006 Technical Report. Descriptions of the countries' school-level and within-school exclusions can be found in Appendix B of the PIRLS 2006 Technical Report.
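The 5 percent annotation rule described above is simple enough to state as a check. The function below is a minimal sketch of that rule; the parameter names are illustrative, and both percentages are understood as shares of the national desired target population.

```python
def needs_annotation(school_level_pct, within_school_pct, limit_pct=5.0):
    """Flag a country whose combined exclusion rate exceeds the PIRLS limit.

    Results for countries whose combined school-level and within-school
    exclusions exceeded 5 percent of the national desired target population
    were annotated in the PIRLS 2006 International Report.
    """
    return (school_level_pct + within_school_pct) > limit_pct
```

For example, a country with 3.2 percent school-level and 2.1 percent within-school exclusions (5.3 percent combined) would be annotated, while 2.0 and 2.9 percent (4.9 percent combined) would not.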
Producers and sponsors
International Association for the Evaluation of Educational Achievement (IEA)
PIRLS International Study Center
National Center for Education Statistics of the U.S. Department of Education
The World Bank
The basic sample design used in PIRLS 2006 is known as a two-stage stratified cluster design, with the first stage consisting of a sample of schools and the second stage consisting of a sample of intact classrooms from the target grade in the sampled schools. While all participants adopted this basic two-stage design, four countries, with approval from the PIRLS sampling consultants, added an extra sampling stage. The Russian Federation and the United States introduced a preliminary sampling stage (first sampling regions in the case of the Russian Federation, and primary sampling units consisting of metropolitan areas and counties in the case of the United States). Morocco and Singapore instead added a third sampling stage, sub-sampling students within classrooms rather than selecting intact classes.
For countries participating in PIRLS 2006, school stratification was used to enhance the precision of the survey results. Many participants employed explicit stratification, where the complete school sampling frame was divided into smaller sampling frames according to some criterion, such as region, to ensure a predetermined number of schools sampled for each stratum. For example, Austria divided its sampling frame into nine regions to ensure proportional representation by region (see Appendix B for stratification information for each country). Stratification also could be done implicitly, a procedure by which schools in a sampling frame were sorted according to a set of stratification variables prior to sampling. For example, Austria employed implicit stratification by district and school size within each regional stratum. Regardless of the other stratification variables used, all countries used implicit stratification by a measure of size (MOS) of the school.
All countries used a systematic (random start, fixed interval) probability-proportional-to-size (PPS) sampling approach to sample schools. Note that when this method is combined with an implicit stratification procedure, the allocation of schools in the sample is proportional to the size of the implicit strata. Within the sampled schools, classes were sampled using a systematic random method in all countries except Morocco and Singapore, where classes were sampled with probability proportional to size, and students within classes were sampled with equal probability. The PIRLS 2006 sample designs were implemented in an acceptable manner by all participants.
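The systematic PPS school-selection step can be sketched as follows. This is a minimal illustration of the random-start, fixed-interval method on the cumulative measure of size, not the operational algorithm used by Statistics Canada; it assumes the frame is already sorted by the implicit stratification variables and ignores very large ("certainty") schools, which real frames handle separately.

```python
import random

def systematic_pps(frame, n, rng=random):
    """Draw a systematic PPS sample of n schools from an ordered frame.

    frame: list of (school_id, mos) pairs, sorted by the implicit
    stratification variables. A random start in [0, interval) and a fixed
    interval are applied along the cumulative measure of size (MOS), so a
    school's selection probability is proportional to its MOS.
    """
    total_mos = sum(mos for _, mos in frame)
    interval = total_mos / n
    start = rng.uniform(0, interval)
    points = [start + i * interval for i in range(n)]

    sample, cum, k = [], 0.0, 0
    for school_id, mos in frame:
        cum += mos  # school occupies the MOS range (cum - mos, cum]
        while k < n and points[k] <= cum:
            sample.append(school_id)
            k += 1
    return sample
```

Because the frame stays in its sorted order, the fixed interval spreads the selections across the implicit strata, which is what makes the sample allocation proportional to the size of those strata.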
Deviations from the Sample Design
Eight National Research Coordinators (NRCs) encountered organizational constraints in their systems that necessitated deviations from the sample design. In each case, the Statistics Canada sampling expert was consulted to ensure that the altered design remained compatible with the PIRLS standards.
These country-specific deviations from the sample design are detailed in Appendix B of the PIRLS 2006 Technical Report (page 231).
Ideally, response rates to study samples should always be 100 percent, and although the PIRLS 2006 participants worked hard to achieve this goal, it was anticipated that a 100 percent participation rate would not be possible in all countries. To avoid sample size losses, the PIRLS sampling plan identified, a priori, replacement schools for each sampled school. Therefore, if an originally selected school refused to participate in the study, it could be replaced by a school that had been identified in advance. Each originally selected school had up to two pre-assigned replacement schools. In general, the school immediately following the originally selected school on the ordered sampling frame and the one immediately preceding it were designated as replacement schools. Replacement schools always belonged to the same explicit stratum, although they could come from different implicit strata if the originally selected school was either the first or last school of an implicit stratum.
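The neighbour rule for pre-assigning replacements can be sketched directly from the description above. This is a simplified illustration with hypothetical school identifiers; the real assignment also had to avoid clashes between nearby sampled schools.

```python
def replacements(frame, i):
    """Pre-assigned replacements for the sampled school at frame position i.

    frame: ordered sampling frame as (school_id, explicit_stratum) pairs.
    The next school on the frame serves as the first replacement and the
    previous school as the second, provided each lies in the same explicit
    stratum (replacements may cross implicit strata, never explicit ones).
    Returns None for a replacement that would fall outside the stratum.
    """
    _, stratum = frame[i]
    nxt = frame[i + 1][0] if i + 1 < len(frame) and frame[i + 1][1] == stratum else None
    prev = frame[i - 1][0] if i - 1 >= 0 and frame[i - 1][1] == stratum else None
    return nxt, prev
```

For a school at the boundary of its explicit stratum, only one replacement is available, matching the constraint that replacements never cross explicit strata.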
For a full table of school participation rates please see Exhibit 9.5 on page 126 of the PIRLS 2006 Technical Report.
Dates of Data Collection
Data Collection Mode
Data Collection Notes
Each country was responsible for carrying out all aspects of the data collection, using standardized procedures developed for the study. Manuals provided explicit instructions to the NRCs and their staff members on all aspects of the data collection – from contacting sampled schools to packing and shipping materials to the IEA Data Processing Center for processing and verification. Manuals were also prepared for test administrators and for individuals in the sampled schools who work with the national centers to arrange for the data collection within the schools. These manuals addressed all aspects of the assessment administration within schools (including test security, distribution of booklets, timing and conduct of the testing session, and returning materials to the national center).
The PIRLS International Study Center placed great emphasis on monitoring the quality of the PIRLS data collection. In particular, the International Study Center implemented an international program of site visits, whereby international quality control monitors visited a sample of 15 schools in each country and observed the test administration. In addition to the international program, NRCs were also expected to organize an independent national quality control program based upon the international model. The latter program required national quality control observers to document data collection activities in their country. The national quality control observers visited a random sample of 10 percent of the schools (in addition to those visited by the international quality control monitors), and recorded their observations from the testing sessions for later analysis.
PIRLS Background Questionnaires
By gathering information about children’s experiences together with reading achievement on the PIRLS test, it is possible to identify the factors or combinations of factors that relate to high reading literacy. An important part of the PIRLS design is a set of questionnaires targeting factors related to reading literacy. PIRLS administered four questionnaires: to the tested students, to their parents, to their reading teachers, and to their school principals.
Each student taking the PIRLS reading assessment completes the student questionnaire. The questionnaire asks about aspects of students’ home and school experiences – including instructional experiences and reading for homework, self-perceptions and attitudes towards reading, out-of-school reading habits, computer use, home literacy resources, and basic demographic information.
Learning to Read (Home) Survey
The learning to read survey is completed by the parents or primary caregivers of each student taking the PIRLS reading assessment. It addresses child-parent literacy interactions, home literacy resources, parents’ reading habits and attitudes, home-school connections, and basic demographic and socioeconomic indicators.
The reading teacher of each fourth-grade class sampled for PIRLS completes a questionnaire designed to gather information about classroom contexts for developing reading literacy. This questionnaire asks teachers about characteristics of the class tested (such as size, reading levels of the students, and the language abilities of the students). It also asks about instructional time, materials and activities for teaching reading and promoting the development of their students’ reading literacy, and the grouping of students for reading instruction. Questions about classroom resources, assessment practices, and home-school connections also are included. The questionnaire also asks teachers for their views on opportunities for professional development and collaboration with other teachers, and for information about their education and training.
The principal of each school sampled for PIRLS responds to the school questionnaire. It asks school principals about enrollment and school characteristics (such as where the school is located, resources available in the surrounding area, and indicators of the socioeconomic background of the student body), characteristics of reading education in the school, instructional time, school resources (such as the availability of instructional materials and staff), home-school connections, and the school climate.
To ensure the availability of comparable, high-quality data for analysis, PIRLS took rigorous quality control steps to create the international database. Countries used manuals and software provided by PIRLS to create and check their data files, so that the information would be in a standardized international format before being forwarded to the IEA Data Processing Center. Upon arrival at the DPC, the data underwent an exhaustive cleaning process involving several steps and procedures designed to identify, document, and correct deviations from the international instruments, file structures, and coding schemes. The process also emphasized consistency of information within national data sets, and appropriate linking among the student, parent, teacher, and school data files.
Public use files, available to all
International Association for the Evaluation of Educational Achievement (IEA). Progress in International Reading Literacy Study 2006 Version 1.1 [dataset]. Chestnut Hill, MA: PIRLS International Study Centre [producer], 2008. Cape Town: DataFirst [distributor], 2015. DOI: https://doi.org/10.25828/1avc-2f74