Antidisciplinary Project Studio I

Course Information (Tentative)

Course: Antidisciplinary Project Studio I (APS-I)
Institution: School of Design & Science, Chiba Institute of Technology
Term: 2026 — April 14 to July 16 (13 weeks)
Meets: Tuesday & Thursday, 13:00–16:00 (Lecture + Tutorial + Studio)
Location: Room 722 (Bldg. 7, 2F)

Instructors

Ira Winder · Joe Austerweil

Supporting Faculty

Hiroki Kojima, Mizuki Oka, Catharina Maracke, Daum Kim

Supporting Staff

Grisha Szep

Guest Lecturers

Colin Rowat, Leonard Lin, Alyssa Adams

Course Description

APS-I is a studio course that explores complex, adaptive systems through the lens of agent-based simulation. Students investigate how intelligent behavior emerges from simple rules — sensing environments, forming beliefs under uncertainty, making decisions, and cooperating (or competing) with other agents. Each phase builds on the last, progressing from individual perception to multi-agent dynamics to questions of digital fairness and culture.

All learning happens through interactive simulations built with p5.js and guided by an AI collaborator. There are no traditional coding assignments; instead, students run experiments, form hypotheses, and document discoveries. AI is not a shortcut — it is a structured part of the pedagogy. Students use an AI coding assistant as a real-time teaching assistant, debugging partner, and analytical collaborator across every module.
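For a flavor of what these simulations look like, the sketch below is a minimal, illustrative p5.js example (not course code): a single agent follows one simple rule, stepping toward the brightest nearby point of a noisy light field. The light-field function and all names are assumptions made purely for illustration.

```javascript
// Illustrative only: one agent, one rule.
// Assumes p5.js is loaded in global mode (plain <script> tag), so setup()/draw() are found automatically.

let agent;

function setup() {
  createCanvas(400, 400);
  agent = { x: width / 2, y: height / 2 };
  noiseSeed(42); // reproducible "environment"
}

// Environmental brightness at (x, y): Perlin noise stands in for a light field.
function light(x, y) {
  return noise(x * 0.01, y * 0.01);
}

function draw() {
  background(245);

  // Simple rule: sample eight nearby points and step toward the brightest one.
  let best = { x: agent.x, y: agent.y, value: light(agent.x, agent.y) };
  for (let i = 0; i < 8; i++) {
    const angle = (TWO_PI * i) / 8;
    const x = constrain(agent.x + 10 * cos(angle), 0, width);
    const y = constrain(agent.y + 10 * sin(angle), 0, height);
    const value = light(x, y);
    if (value > best.value) best = { x, y, value };
  }
  agent.x = best.x;
  agent.y = best.y;

  fill(30);
  circle(agent.x, agent.y, 10);
}
```

Even this one rule yields recognizable hill-climbing behavior; the course's simulations layer noise, learning, and other agents on top of building blocks like this.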

Learning Objectives

  1. Model complex environments — understand how agents perceive and represent noisy, dynamic worlds.
  2. Reason under uncertainty — apply Bayesian inference to update beliefs from evidence (a minimal sketch follows this list).
  3. Design adaptive agents — implement explore-vs-exploit strategies and evaluate learning algorithms.
  4. Analyze emergent behavior — observe how cooperation, competition, and complexity arise from simple agent interactions.
  5. Collaborate critically with AI — develop calibrated trust in AI tools; know when to rely on AI and when to verify independently.
  6. Communicate scientific reasoning — document hypotheses, experiments, and findings in clear, evidence-based writing.
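To make the Bayesian objective concrete, here is a minimal, illustrative JavaScript sketch (invented numbers, not taken from MP1) that updates a belief over two hypotheses about a fishing zone after each cast:

```javascript
// Illustrative only: Bayesian update over two hypotheses about a fishing zone.
// H_good: "this zone is good" (catch probability 0.7, assumed)
// H_poor: "this zone is poor" (catch probability 0.2, assumed)

const likelihood = { good: 0.7, poor: 0.2 }; // P(catch | hypothesis)
let belief = { good: 0.5, poor: 0.5 };       // uniform prior

// Update beliefs after observing one cast (caught = true/false).
function update(belief, caught) {
  const l = {
    good: caught ? likelihood.good : 1 - likelihood.good,
    poor: caught ? likelihood.poor : 1 - likelihood.poor,
  };
  const unnormalized = {
    good: l.good * belief.good,
    poor: l.poor * belief.poor,
  };
  const z = unnormalized.good + unnormalized.poor; // probability of the observation
  return { good: unnormalized.good / z, poor: unnormalized.poor / z };
}

// Three casts: catch, catch, miss.
for (const caught of [true, true, false]) {
  belief = update(belief, caught);
}
console.log(belief); // belief in "good" rises after the catches, then dips slightly after the miss
```

Each observation reweights the hypotheses by their likelihoods and renormalizes; that single step, repeated, is the essence of Bayesian belief updating.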

AI Integration & Interaction Policy

This course uses an AI coding assistant as the primary AI interface. Every student receives an API token allocation for the term, granting access to the AI for in-class tutorials, mini-project work, and studio sessions.

How Tokens Are Used

Each interaction with the AI consumes API tokens proportional to the length of the conversation. Tokens are spent every time you send a message or the AI responds — longer, more detailed exchanges use more tokens. Your allocation is designed to support all required coursework with room for exploration, but it is not unlimited.
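As a rough illustration only (actual counts vary by model and tooling): English text averages roughly 3 to 4 characters per token, so a short question plus a detailed, code-heavy reply can run to a few thousand tokens, and pasting a whole source file as context multiplies that.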

| Activity | Token Usage | Guidance |
|---|---|---|
| Onboarding (T0) | Low | Familiarize yourself with the AI assistant; short exploratory conversations. |
| Tutorials (T0–T3) | Low–Moderate | Guided walkthroughs; ask the AI to explain concepts and step through code. |
| Mini-Projects (MP1–MP4) | Moderate–High | Core learning happens here. Use the AI for hypothesis testing, debugging, and synthesis. |
| Studio & Final Project | Variable | Open-ended; budget tokens toward your most challenging questions. |

Best practices for token efficiency: Start with specific, focused questions rather than open-ended prompts. Provide context (paste relevant code or describe the simulation state) so the AI doesn't have to guess. End conversations when you have what you need — don't leave sessions running idle.

How AI Use Is Evaluated

AI use is evaluated through five complementary mechanisms that take advantage of the small class size to prioritize depth over breadth.

1. Session Log Review (Instructor)

Every AI session is automatically logged as a .jsonl file saved to your deliverables/sessions/ folder. Session logs are a required deliverable — they serve as evidence of your learning process, not just your final answers.
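The exact log schema depends on the assistant and tooling you use; purely as a hypothetical illustration of the format, a .jsonl file is one JSON object per line, for example:

```jsonl
{"role": "user", "timestamp": "2026-04-21T13:42:10+09:00", "content": "Why does my agent stop exploring when epsilon reaches 0?"}
{"role": "assistant", "timestamp": "2026-04-21T13:42:14+09:00", "content": "With epsilon at 0 the agent always exploits its current estimates, so..."}
```

The field names above are invented for illustration; submit whatever your assistant actually produces.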

With a small cohort, instructors review logs comprehensively rather than by sampling. Logs are read with attention to:

  • Task delegation patterns — What kinds of tasks did you give the AI? Brainstorming and exploration, code generation, debugging, analysis, writing? How did this mix change over the term?
  • Prompt evolution — Early and late session logs are compared. We look for prompts that become more specific, more domain-informed, and more exploratory as the course progresses — but also for moments where a student deliberately chose not to use AI, which can be equally revealing.
  • Critical verification — Did you test the AI's suggestions against actual simulation behavior rather than accepting them blindly? Did you catch errors the AI missed? Did you design experiments to check the AI's explanations?
  • Depth of inquiry — Did you ask follow-up questions and push past surface-level answers, or stop at the first response?
  • Tool selection — Did you make deliberate choices about when to use a CLI-based AI, a web chat interface, or no AI at all? Can you articulate why?

2. Peer Review (Students)

During presentation sessions, students evaluate each other's projects on three dimensions:

  • Interestingness — Does the project ask a compelling question or explore a surprising direction?
  • Value — Does the work produce meaningful insight, a useful result, or a clear demonstration of understanding?
  • Uniqueness — Does the project feel distinctive, or could it have been produced by anyone giving the same prompt to an AI?

With 6 students, every student reviews every project, providing complete rather than partial coverage. Peer review helps surface whether AI-assisted work retains a personal point of view. Projects that use AI ambitiously — to attempt something the student couldn't have done alone — are valued over projects that use AI as a shortcut.

3. Student Reflections (Self-Report)

Three structured reflections are completed: at the beginning of the semester, at the midpoint of the course, and after final presentations. Rather than a standardized survey designed for statistical analysis, these are brief written reflections (~1 page) addressing:

  • Familiarity & comfort — Your existing experience with AI tools; your comfort level delegating different types of tasks, and how this has changed.
  • Confidence — Your ability to specify what you want from the AI; your trust in the accuracy and usefulness of AI results.
  • Experience — Moments of enjoyment, surprise, frustration, or discomfort while learning with AI. What worked and what didn't.
  • Behavior — What types of investigations you attempted with AI; which succeeded and which failed; whether you would have attempted them without AI.
  • Comparisons — How working with AI compares to working with human peers, domain experts, instructors, and traditional web search.
  • Collaboration dynamics — For collaborative work: how your team divided AI labor, who prompted, and whether AI was a shared or individual resource.

The small class size makes these reflections a genuine source of insight rather than a data collection exercise. Instructors read every reflection and may follow up in conversation.

4. Periodic Check-Ins (Instructor–Student)

At the midpoint of the course (around Week 6) and again before the final project, each student has a brief one-on-one conversation with an instructor about their AI collaboration experience. These are informal and developmental — not evaluative. The goal is to surface patterns that students may not notice in their own session logs: strategies that are working well, habits that may be limiting exploration, and opportunities to push their AI collaboration further in the remaining weeks.

5. AI Interaction Analytics

This course analyzes patterns in how students interact with AI tools — not to measure productivity or penalize usage, but to understand how AI collaboration skills develop over the semester.

Importantly, more AI use does not necessarily indicate better performance, and less does not necessarily indicate worse. A student who achieves a strong outcome in few interactions may have developed excellent specification skills; equally, a student who engages in extensive back-and-forth may be doing the harder work of exploring alternatives and stress-testing outputs. As students internalize practices like forming hypotheses before prompting and verifying AI suggestions against simulation behavior, we expect interaction strategies to become more intentional — which may mean more interaction, not less. The evaluation treats interaction patterns as data requiring interpretation, not as outcomes in themselves.

Interaction patterns by project phase. With four mini-projects and a final project spanning sensing, reasoning, emergence, and fairness, the course offers a natural progression in complexity. Interaction patterns are examined across these phases to understand how students' AI collaboration strategies evolve as they encounter increasingly open-ended problems.

Prompt craft over time. Early and late session logs are compared — not for a simple "better or worse" judgment, but to trace how students develop more intentional interaction strategies. The question is whether students learn to decompose complex simulation questions into targeted AI queries, provide richer context, and know when more iteration is warranted versus when to step back and think independently.

Interaction depth and project quality. The number of revision cycles per task is tracked alongside mini-project rubric scores and peer ratings. This pairing examines whether iteration depth predicts project quality — and if so, whether the relationship depends on the type of task (conceptual exploration vs. debugging vs. synthesis). The absence of a relationship would be an equally informative finding.
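As a purely illustrative sketch of the kind of pairing involved (made-up numbers, and a single correlation is only a starting point for interpretation):

```javascript
// Illustrative only: relate per-student revision cycles to rubric scores.
function pearson(xs, ys) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < xs.length; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

const revisionCycles = [3, 7, 2, 9, 5, 4];       // hypothetical, one value per student
const rubricScores   = [72, 88, 70, 91, 80, 78]; // hypothetical
console.log(pearson(revisionCycles, rubricScores).toFixed(2));
```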

Data Use & Privacy

Reflections, session logs, and other course data are used to improve this course and to inform broader lessons about teaching with AI among School of Design & Science faculty and staff.

With a class of this size, truly anonymous data does not exist — project descriptions, interaction patterns, and research topics can identify individuals regardless of whether names are removed. The course therefore operates under the following principles:

  • Within the course, all collected data (session logs, reflections, peer reviews) informs instruction and feedback.
  • Any use of this data beyond the course — including research publications and partnership reporting — will be conducted under institutional ethical review. No externally shared data will be individually identifiable.
  • Students may opt out of external data use at any time with no effect on their grade or standing. Opting out means your data is used only for course administration and your own feedback — it is excluded from any research or external reporting.
  • Because small-cohort data carries inherent re-identification risk even when anonymized, external reporting will use techniques such as aggregation across multiple cohorts, suppression of unique cases, and composite descriptions rather than individual case studies. Students will be consulted before any external use of data that could plausibly be traced back to them.

Details on data handling and consent will be provided in Session 1.

Course Structure

The course is organized into an onboarding phase and four thematic phases (Weeks 1–7), followed by a studio period and final presentations (Weeks 8–12).

Session Format (~3 hours)

| Block | Duration | Description |
|---|---|---|
| Lecture | ~1 hour | Conceptual foundations, readings discussion, guest speakers. |
| Tutorial | ~1 hour | Guided, in-class exploration of interactive simulations with an AI assistant. |
| Studio | ~1 hour | Independent or group work on mini-projects; instructor support available. |

Phases & Modules

| Phase | Weeks | Theme | Tutorials | Mini-Project |
|---|---|---|---|---|
| 0 | 1 | Onboarding & Collaboration Amongst Diversity | T0-A, T0-B | — |
| 1 | 2 | Environmental Sensing, Modeling & Agents | — | MP1 — Bayesian inference with fishing zones |
| 2 | 3 | Reasoning, Judgment & Decision-Making | T2 | MP2 — Debug an epsilon-greedy learning agent (sketched below) |
| 3 | 4–5 | Emergence & Multi-Agent Systems | T3-A, T3-B | MP3 — Multi-agent cooperation & competition |
| 4 | 5–7 | Digital Fairness & Culture | — | MP4 — TBA |
| — | 8–12 | Studio & Final Presentations | — | Final Project |
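MP2 in the table above involves debugging an epsilon-greedy learning agent. As a hedged, minimal sketch of the explore-vs-exploit rule in question (the actual MP2 code will differ):

```javascript
// Illustrative epsilon-greedy action selection (not the MP2 codebase).
// With probability epsilon, explore a random arm; otherwise exploit the best current estimate.

function chooseArm(estimates, epsilon) {
  if (Math.random() < epsilon) {
    return Math.floor(Math.random() * estimates.length); // explore
  }
  return estimates.indexOf(Math.max(...estimates));       // exploit
}

// Incremental update of the value estimate for the chosen arm.
function updateEstimate(estimates, counts, arm, reward) {
  counts[arm] += 1;
  estimates[arm] += (reward - estimates[arm]) / counts[arm];
}

// Tiny simulated run against two arms with hidden payout rates 0.3 and 0.6.
const trueRates = [0.3, 0.6];
const estimates = [0, 0];
const counts = [0, 0];
for (let t = 0; t < 1000; t++) {
  const arm = chooseArm(estimates, 0.1);
  const reward = Math.random() < trueRates[arm] ? 1 : 0;
  updateEstimate(estimates, counts, arm, reward);
}
console.log(estimates); // estimates drift toward the true payout rates
```

Exploration keeps the estimates honest; a common bug class is letting epsilon collapse to zero before the estimates are reliable.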

Assessment & Grading

Overall Grade

| Category | Weight | Components |
|---|---|---|
| Mini-Projects | 50% | MP1 (×1), MP2 (×1), MP3 (×2), MP4 (×2) |
| Final Project | 50% | Proposal (×3), Product (×5), Presentation (×1.5), Peer Review (×0.5) |

Mini-project weights reflect increasing scope: MP1 and MP2 each count for one share, while MP3 and MP4 each count for two (6 shares total). Final project component weights reflect emphasis on the product itself, supported by the proposal, presentation, and peer evaluation (10 shares total).
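For example, with hypothetical mini-project scores of 80, 90, 70, and 85 (each out of 100), the mini-project category score would be (80×1 + 90×1 + 70×2 + 85×2) / 6 = 80. The final-project category is computed the same way over its 10 shares, and the two category scores are then combined 50/50.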

Mini-Project Rubric

Each mini-project is assessed across five equally weighted components (20 points each). The specific components vary by module but follow a consistent pattern:

| Component | Points | What Is Evaluated |
|---|---|---|
| Conceptual Foundation | 20 | Predictions and priors before observing; evidence of independent thought. |
| Observation & Evidence | 20 | Systematic exploration; documenting what happened and why. |
| Deep Mechanism Understanding | 20 | Targeted experiments that test specific hypotheses about the system. |
| Strategic Application | 20 | Applying understanding under constraints (limited tools, resources, or time). |
| Synthesis & Reflection | 20 | Written summary, real-world analogy, code reading with AI, session logs. |

Deliverables Per Mini-Project

  • Investigation workbook — predictions, observations, and analysis for each task.
  • Synthesis document — conceptual summary, recommendation, and real-world connection.
  • AI session logs — .jsonl conversation files, automatically saved.
  • Verification evidence — screenshots or behavioral traces from the simulation.

Grade Benchmarks

| Grade | Description |
|---|---|
| A+ / Exceptional | Deep understanding across all components; metacognitive reflection on AI collaboration and learning process. |
| A / Mastery | Strong evidence of understanding; thoughtful experiments and clear written synthesis. |
| B / Proficiency | Solid work on core tasks; systematic approach but may lack depth in synthesis or reflection. |
| C / Minimum | Completed required tasks; simulation runs but limited evidence of deeper understanding. |

The best submissions show predictions that evolve, AI conversations that go beyond the surface level, and principles articulated clearly in the student's own words.

Tools & Setup

  • Code editor / IDE — any editor with AI assistant integration (terminal access required).
  • AI coding assistant — serves as your teaching assistant, debugger, and collaborator. Provided via API token.
  • Web browser — Chrome or Firefox; all simulations run directly in the browser via p5.js.
  • Python 3 — only needed for the local HTTP server (python3 -m http.server 8000).
  • Git — for version control and accessing course materials.

There is no npm, no build step, no package manager. All code runs as static HTML + ES modules.
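A typical workflow, as a minimal sketch (the placeholders below stand in for the actual course repository, which will be announced in class):

```bash
git clone <course-repo-url>     # replace with the actual course repository
cd <course-repo>
python3 -m http.server 8000     # serve the static files locally
# then open http://localhost:8000 in Chrome or Firefox
```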

Course Policies

AI Collaboration

AI use is expected and structured, not prohibited. An AI coding assistant is embedded in every module as a learning tool. Students are responsible for understanding and verifying all AI-assisted work. Session logs are reviewed to assess the quality of collaboration — not to police usage, but to reward genuine inquiry.

Academic Integrity

Using AI to generate answers you submit without understanding is a violation of academic integrity. Using AI to deepen your understanding, test hypotheses, and debug your thinking is exactly what this course teaches. The distinction is intellectual ownership: you must be able to explain and defend any work you submit.

Attendance & Participation

Attendance is expected for all sessions. Tutorials and studio time involve hands-on work that cannot easily be replicated independently. If you must miss a session, coordinate with your instructor in advance.

Late Work

Mini-project deliverables are due at the start of the presentation session for that phase. Late submissions may be accepted with prior arrangement but will be evaluated at the instructor's discretion.

Readings & References

Introduction

  • Weyl, E. G. et al. (2023). A Widening Gulf (Section 2-0). Plurality: The Future of Collaborative Technology and Democracy.

Phase 1 — Environmental Sensing & Agents

  • Marr, D. (1982). Vision, pp. 8–37.
  • von Uexküll, J. (1909). Umwelt und Innenwelt der Tiere, pp. 44–63, 86–91.
  • Clark, A. (1999). An embodied cognitive science?
  • Braitenberg, V. (1984). Vehicles: Experiments in Synthetic Psychology, pp. 1–9.

Phase 2 — Reasoning & Decision-Making

Phases 3 & 4 — Emergence, Multi-Agent, Fairness