Program - JSWORLD Conference


  • Meet, Greet & City Tours

Join us on Wednesday night, the day prior to the event, for networking, city tours, and fun with the rest of the community!

  • Mathias Biilmann

Agent Experience: a discipline alongside UX and DX, focused on building products for agents and making them easy to access and a great experience to use.

    By Mathias Biilmann

  • Anand Chowdhary

    How we test & deploy LLM prompts at FirstQuadrant

    By Anand Chowdhary

    The founder of FirstQuadrant, an AI sales platform funded by Y Combinator, offers an in-depth exploration of their internally developed system for prompt engineering with LLMs. The talk will provide a detailed walkthrough of the system's architecture, highlighting its capabilities in versioning, models, and testing methodologies. Key focus areas include the mechanism of comparing new prompt versions against historical data, the intricacies of implementing full logging, and the strategies for analyzing and optimizing token costs. This talk is tailored for a technical audience, aiming to share practical insights and techniques in prompt engineering that can be applied in other AI and machine learning contexts.

  • Liran Tal

    Giving Your Agentic Coding AI a Security Brain

    By Liran Tal

AI can generate a week’s worth of code before lunch and just as quickly ship SSRF, RCE, and path traversal vulnerabilities into prod. Rules and “/security-review” prompts aren’t enough: they’re costly, brittle, and non-deterministic. Run them three times, get three answers. Meanwhile, who vets hallucinated npm packages as the agent installs them? Oh, you’re running the agent with “--dangerously-skip-permissions”? Color me surprised, sigh. Well, the good news is you don’t have to trade speed for security; let me show you how. This talk shows a concrete, developer-first pattern: learn how to use MCPs to give agents real security superpowers. We’ll wire in just-in-time package health checks and deterministic code reviews via a security MCP server, with clear, contextually engineered details for your agent. You’ll leave with a better understanding of the security dangers of relying on agentic coding tools alone, and a reliable MCP-based agentic workflow to make AI coding fast and safely shippable.

  • Milica Aleksic

    React Native in Practice: Hard Lessons from Shipping Code

    By Milica Aleksic

    Productivity in tech teams is often discussed in terms of tools and processes, but in mobile development it is largely shaped by everyday engineering decisions. In this talk, I’ll share lessons from shipping and maintaining React Native apps in production. Using real-world examples, I’ll show how decisions around component reuse, performance optimization, upgrade strategy, and ownership directly affect team velocity. We’ll look at where React Native helps teams move faster, where it introduces friction, and how small technical choices can compound over time into either productivity gains or slowdowns.

  • Andrei Tazetdinov

    Killing Wasted Re-Renders with Production Hook Instrumentation

    By Andrei Tazetdinov

    Modern React and React Native applications often suffer from invisible performance problems: unnecessary re-renders, unstable dependency arrays, and effects that execute far more often than developers expect. In development, everything looks fine. In production, real users pay the price. In this talk, I’ll show how we instrumented React hooks at build time, measured their behavior in production, and discovered that nearly half of our component updates were unnecessary. By wrapping useEffect, useMemo, and render cycles with lightweight compile-time transforms, we collected real-world data without relying on DevTools or manual profiling. You’ll see how this instrumentation works under the hood, what surprising patterns emerged from production telemetry, and how these insights helped us significantly reduce wasted renders and improve real user experience. Instead of guessing where performance issues come from, we’ll learn how to observe React from the inside — safely, systematically, and at scale.

  • Dani Coll

    Hack Me If You Can: Uncovering Web Vulnerabilities

    By Dani Coll

    Do you lie awake at night wondering if your app could be compromised? Have you ever questioned how secure the apps you ship using your favorite framework really are? Do you know which types of vulnerabilities you might be exposed to when proper security systems are not in place? In this talk, we'll explore how some major companies were hacked in the past and dive into a demonstration of how our guinea pig web app is hacked to uncover the threats that could easily impact your own projects.

  • Adam Cowley

    UI in the age of AI

    By Adam Cowley

When the backend can reason, what does that mean for the frontend? Let's look at how to build UIs that support reasoning and adapt to any task. The way we interact with software is changing. LLM-powered applications, with human-in-the-loop, are handling repetitive tasks that used to require forms and workflows. But bolting a chatbot onto your existing UI isn't enough - extracting structured data from natural language is fragile, adding frustration and friction for users. In this talk, we'll explore how tool-calling and protocols like MCP provide deterministic contracts with non-deterministic systems, and what human-in-the-loop looks like when the UI adapts to the task at hand rather than forcing users through fixed workflows.

  • Paolo Ricciuti

    TMCP: a new way to build MCP servers in TS

    By Paolo Ricciuti

The TypeScript SDK provided by the official Model Context Protocol organization is nice and has a lot of features baked in. But it also has a few problems that make it difficult to use with modern frameworks. TMCP is a brand new, fully featured alternative SDK that aims for a lean, modern API and ease of use. In this talk we'll explore what those limitations are and take a look at how TMCP solves them.

  • Ondřej Chrastina

    Sleep Better on Release Day: A Modern Testing Strategy for JavaScript SDKs and Components

    By Ondřej Chrastina

    At the beginning of my career, my Thursdays were defined by a single, painful ritual: manually clicking through every button in the Digital Experience Platform (DXP) to catch regressions. Over time, we solved this by joining the industry-wide shift toward full automation. Later, managing dozens of SDKs and starters on GitHub at [Kontent.ai](http://kontent.ai/) taught me how to keep a distributed ecosystem consistent. Lately, however, I have been exploring the testing landscape at CKEditor, and it represents a whole new level of complexity. When providing an Open Source core alongside premium plugins, it is not just about testing the code itself; it is about validating a massive, plugin-based architecture where open modules interact with commercial ones, all running inside a variety of customer environments and browser types and versions. In this session, we will ignore the standard "Testing Pyramid" theory and dive straight into the practical strategies required to validate such a complex ecosystem. We will cover The Integration Matrix (testing across frameworks and bundlers), The Timeline Strategy (using Nightly builds to catch upstream browser engine breaks), and AI Model Interoperability: how to automate tests for Generative AI features where "exact match" assertions are impossible.

  • The Annual JSWorld Afterparty

    THURSDAY 7 MAY - Join 2,000 JavaScript & Frontend Enthusiasts for the party of the year, make connections for life, dance to the DJ and sing your heart out during Karaoke.