Notes and Selected Themes and Questions from the Northwestern Meeting.

Comprehensive notes from the 10/21 - 10/22 meetings (downloadable PDF):

Selected themes and questions that emerged from the discussions (text and downloadable PDF):

Key Issues and Points – Selected List (from J. Lemke)

Northwestern Meeting, MacArthur Documenting & Assessing Learning Project

In order to do assessment across settings, whether cumulative or comparative, we need to identify what, among the things that matter, remains the same across settings. Many common assumptions about this are flawed: identities change across settings, as do modes of learning, purposes, and what is valued from the learning activities.

Communities are not bounded, fixed entities; they are abstractions from flows of practices among participants in many communities. Given this, learning cannot be defined simply as progress toward mastery in a community.

Assessment needs to be based on longitudinal, ethnographic records, e.g. collections of material objects and semiotic products, with in-progress versions, over time.

Affective engagement may be one index that is common across settings and therefore important for wider assessment; it includes commitment and persistence.

The STEAM approach offers additional grounds for assessment, e.g. in the design of artifacts and remix products.

It is not just individuals that learn, but systems. We need to document the ways in which a system of learning tools and learners improves its effectiveness by various criteria. Individual learning may be functionally definable only in relation to the wider learning of the ecosystem.
Changes in social networks and the distribution of practices across networks can be evidence of system-level learning (including individual learning).

Badges are useful as a community-internal tool to recognize accomplishment, but they become dangerous tools of social control if they are publicly collected and endorsed by large-scale authorities.

Badges may represent one instance of a valuable concept: crowd-sourcing of assessment within a community of expertise. To assess individuals, small groups, project groups, etc. in depth, you need the power of large numbers of assessors who share some basic values and expertise.

The evidence of learning at level N in a game-like model is the player’s performance at level N+1.

Methods are needed to ensure that when an important learning event is recognized retrospectively, we can capture it and what immediately preceded it (cf. the “TiVo” camera method).

What methods of assessment are most useful for the purpose of improving the learning environment (system, tools) itself?

What quantitative measures in a local setting are most useful as data for assessment across settings and projects?