Authentication is a core part of building any app. Until now, developers on Juno have relied on third-party providers like Internet Identity and NFID. Today we're providing a new option: Passkeys.
This new authentication option is available to all developers using the latest Juno SDK and requires the most recent version of your Satellite containers. You can now enable Passkeys alongside existing providers, and the JavaScript SDK has been updated to make authentication APIs more consistent across sign-in, sign-out, and session management.
Passkeys are a passwordless authentication method built into modern devices and browsers. They let users sign up and sign in using secure digital keys stored in iCloud Keychain, Google Password Manager, or directly in the browser with Face ID, Touch ID, or a simple device unlock instead of a password.
Under the hood, Passkeys rely on the WebAuthn standard and the web API that enables browsers and devices to create and use cryptographic credentials. Passkeys are essentially a user-friendly layer on top of WebAuthn.
When stored in a password manager like iCloud Keychain or Google Password Manager, passkeys sync across a user’s devices, making them more resilient, though this does require trusting the companies that provide those services. If stored only in the browser, however, they can be lost if the browser is reset or uninstalled.
The good news is that most modern platforms already encourage syncing passkeys across devices, which makes them convenient for everyday use, giving users a smooth and safe way to log into applications.
Each authentication method has its strengths and weaknesses. Passkeys provide a familiar, device-native login experience with Face ID, Touch ID, or device unlock, relying on either the browser or a password manager for persistence. Internet Identity and NFID, on the other hand, offer privacy-preserving flows aligned with the Internet Computer, but they are less familiar to mainstream users and involve switching context into a separate window.
In practice, many developers will probably combine Passkeys and Internet Identity side by side, as we do in the starter templates we provide.
Ultimately, the right choice depends on your audience and product goals, balancing usability, privacy, and ecosystem integration.
As you may have noticed, unlike with existing third-party providers, using Passkeys requires distinct sign-up and sign-in flows. This is because the WebAuthn standard is designed so that an app cannot know in advance whether the user has an existing passkey; this is intentional, for privacy reasons. Users must therefore explicitly follow either the sign-up or sign-in path.
It is also worth noting that during sign-up, the user will be asked to use their authenticator twice:
once to create the passkey on their device
and once again to sign the session that can be used to interact with your Satellite.
Given these multiple steps, we added an onProgress callback to the various flows. This allows you to hook into the progression and update your UI, for example to show a loading state or step indicators while the user completes the flow.
import { signUp } from "@junobuild/core";

await signUp({
  webauthn: {
    options: {
      onProgress: ({ step, state }) => {
        // You could update your UI here
        console.log("Progress:", step, state);
      }
    }
  }
});
Previously, calling signIn() without arguments defaulted to Internet Identity. With the introduction of Passkeys, we decided to drop the default. From now on, you must explicitly specify which provider to use for each sign-in call. This makes the API more predictable and avoids hidden assumptions.
In earlier versions, providers could also be passed as class objects. To prevent inconsistencies and align with the variant pattern used across our tooling, providers (and their options) must now be passed through an object.
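For example, a sign-in call now looks something like the sketch below. The webauthn key mirrors the signUp snippet above, while the internet_identity key is an assumption used here to illustrate the variant pattern; check the authentication docs for the exact option names.

import { signIn } from "@junobuild/core";

// Sign in with a passkey (same variant pattern as the signUp example above)
await signIn({ webauthn: {} });

// Or explicitly pick Internet Identity (key name assumed; see the docs)
await signIn({ internet_identity: {} });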
By default, calling signOut will automatically reload the page (window.location.reload) after a successful logout. This is a common pattern in sign-out flows that ensures the application restarts from a clean state.
If you wish to opt out of the reload, set the windowReload option to false; the library still clears its internal state and authentication session either way:
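A minimal sketch of opting out:

import { signOut } from "@junobuild/core";

// The library still clears its internal state and session,
// but the page is not reloaded afterwards
await signOut({ windowReload: false });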
To make the API more consistent with the industry standards, we introduced a new method called onAuthStateChange. It replaces authSubscribe, which is now marked as deprecated but will continue to work for the time being.
The behavior remains the same: you can reactively track when a user signs in or out, and unsubscribe when you no longer need updates.
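A minimal sketch, assuming the callback receives the signed-in user (or null after sign-out) and returns an unsubscribe function, just as authSubscribe did:

import { onAuthStateChange } from "@junobuild/core";

// React whenever the user signs in or out
const unsubscribe = onAuthStateChange((user) => {
  console.log("Auth state changed:", user);
});

// Stop listening when you no longer need updates
unsubscribe();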
Passkeys are now available, alongside updates to the authentication JS APIs. With passwordless sign-up and sign-in built into modern devices, your users get a smoother experience.
Check out the updated documentation for details on:
One of the principles that shaped Juno from day one was the idea of building apps with full ownership — no hidden infrastructure, no opaque servers.
No hypocrisy either.
If developers are encouraged to deploy code in containers they control, it feels inconsistent to rely on centralized infrastructure — like AWS or other Web2 cloud providers — to manage deployment pipelines or run the platform. With the exception of email notifications, Juno currently runs entirely on the Internet Computer — and that's a deliberate choice.
That doesn't mean being stubborn for the sake of it. It just means trying to push things forward without falling back on the old way unless absolutely necessary.
At the same time, developer experience matters — a lot. It shouldn't take a degree in DevOps to ship a backend function. Developers who would typically reach for a serverless option should be able to do so here too. And for those who prefer to stay local, it shouldn't feel like a downgrade — no one should be forced into CI automation if they don't want to.
That's why the new GitHub Actions for serverless functions are now generally available — for those who want automation, not obligation.
Build serverless functions written in TypeScript or Rust
Automatically publish them to a Satellite
Optionally propose or directly apply upgrades
All within a GitHub Actions workflow. No manual builds, no extra setup — just code, commit, and push.
This makes it easier to fit Juno into an existing CI/CD pipeline or start a new one from scratch. The logic is bundled, metadata is embedded, and the container is ready to run.
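For illustration, a workflow could look roughly like the sketch below. The args input and the JUNO_TOKEN secret reflect how the action is typically wired up, but treat them as assumptions and follow the GitHub Actions guide for the authoritative setup.

name: Deploy to Juno

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      # Build the frontend before handing it to the action
      - run: npm ci && npm run build
      # Slim flavor: enough for deploying frontend assets
      - uses: junobuild/juno-action@slim
        with:
          args: deploy
        env:
          JUNO_TOKEN: ${{ secrets.JUNO_TOKEN }}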
You might ask yourself: "But what about the risk of giving CI full control over my infrastructure?"
That's where the improved access key roles come in (access keys were previously named "Controllers").
Instead of handing over the master key, you give CI just enough access to do its job — and nothing more.
Here's how the roles break down in plain terms:
Administrator – Full control. Can deploy, upgrade, stop, or delete any module. Powerful, but risky for automation. Might be useful if you're spinning up test environments frequently.
Editor (Write) – Ideal for CI pipelines that deploy frontend assets or publish serverless functions. Can't upgrade, stop, or delete modules. A good default.
Submitter 🆕 – The safest option. Can propose changes but not apply them. Someone still needs to review and approve in the Console or CLI. No surprises, no accidents.
Use Editor for most CI tasks — it gives you automation without widening the blast radius.
Prefer an extra layer of review? Go with Submitter and keep a human in the loop.
Nothing changes in the approach for developers who prefer local development. The CLI remains a first-class tool for building and deploying.
All the new capabilities — from publishing functions to proposing or applying upgrades — are available not just in GitHub Actions or the Console UI, but also fully supported in the CLI.
In fact, the CLI has been improved with a neat addition: you can now append --mode development to interact with the emulator. This allows you to fully mimic production behavior while developing locally. And of course, you can also use any mode to target any environment.
juno functions upgrade --mode staging
juno deploy --mode development
While building serverless functions was never an issue, enabling GitHub Actions to publish and deploy without giving away full control introduced a challenge. How do you let CI push code without handing it the keys to everything?
That's where the idea of a sort of CDN came in.
Each Satellite now has a reserved collection called #_juno/releases. It's like a staging area where CI can submit new WASM containers or frontend assets. If the access key has enough privileges, the submission is deployed right away. If not, it's stored as a pending change — waiting for someone to approve it manually via the Console or CLI.
This builds on the change-based workflow that was added to the Console last year. Funny enough, it brought the Console so close to being a Satellite itself that it became… basically a meta example of what you can build with Juno.
And here's the cherry on top: because there's now a CDN tracking versions, developers can rollback or switch between different function versions more easily. A new CDN tab in the Console UI (under Functions) gives you access to all past versions and history.
Frontend deployment now benefits from the same change-based workflow. By default, when you run juno deploy or trigger a GitHub Action, the assets are submitted as pending changes — and applied automatically (if the access key allows it).
Want to skip that workflow? You still can. The immediate deployment path remains available — handy if something fails, or if you just prefer to keep things simple.
That's because the GitHub Action now comes in two flavors:
junobuild/juno-action or junobuild/juno-action@slim – perfect for common use cases like deploying frontend assets or running simpler CLI tasks. No serverless build dependencies included, so it's faster and more "lightweight" (relatively, it still uses Docker underneath...).
junobuild/juno-action@full – includes everything you need to build and publish serverless functions, with Rust and TypeScript support. It's heavier, but it does the job end to end.
This release isn't just about smoother deployments — it's a step toward making Juno feel like real infrastructure. Though, what is “real infrastructure” anyway? Whatever it is, this one doesn't come with the usual baggage.
Developers get to choose how they ship — locally or through CI. They get to decide what gets deployed and who can do it. They're not forced to rely on some big tech platform for their infra if they don't want to. And thanks to the new CDN and access control model, fast iteration and tight control can finally go hand in hand.
If you've been waiting for a way to ship backend logic without giving up on decentralization — or if you just like things working smoothly — this one's for you.
Go ahead.
Build it.
Push it.
Submit it.
Ship it.
To infinity and beyond,
David
Stay connected with Juno by following us on X/Twitter.
One of the goals with Juno has always been to make building decentralized, secure apps feel like something you're already used to. No weird mental models. No boilerplate-heavy magic. Just code that does what you expect, without touching infrastructure.
And with this release, we're taking another step in that direction:
You can now write serverless functions in TypeScript.
If you're a JavaScript developer, you can define backend behavior right inside your container. It runs in a secure, isolated environment with access to the same hooks and assertions you'd use in a typical Juno Satellite.
No need to manage infrastructure. No need to deploy a separate service. Just write a function, and Juno takes care of the rest.
Cherry on top: the structure mirrors the Rust implementation, so everything from lifecycle to data handling feels consistent. Switching between the two, or migrating later, is smooth and intuitive.
Rust is still the best choice for performance-heavy apps. That's not changing.
But let's be real: sometimes you just want to ship something quickly. Maybe it's a prototype. Maybe it's a feature you want to test in production. Or maybe you just want to stay in the JavaScript world because it's what you know best.
Now you can.
You get most of the same tools, like:
Hooks that react to document or asset events (onSetDoc, onDeleteAsset, etc.); a short sketch follows this list
Assertions to validate operations (assertSetDoc, etc.)
Utility functions to handle documents, storage, and even call other canisters on ICP
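To give a feel for the shape of these functions, here is a minimal sketch of a document hook. It assumes the defineHook helper and OnSetDoc type exported by the @junobuild/functions package; refer to the serverless functions docs for the exact API.

import { defineHook, type OnSetDoc } from "@junobuild/functions";

// Runs after a document is written to the "posts" collection
export const onSetDoc = defineHook<OnSetDoc>({
  collections: ["posts"],
  run: async (context) => {
    // React to the write, e.g. log the key of the affected document
    console.log("Document written:", context.data.key);
  }
});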
The JavaScript runtime is intentionally lightweight. While it doesn't include full Node.js support, we're adding polyfills gradually based on real-world needs. Things like console.log, TextEncoder, Blob, and even Math.random — already covered.
The approach to writing serverless functions in Rust and TypeScript is aligned by design. That means if you outgrow your TS functions, migrating to Rust won't feel like starting from scratch. The APIs, structure, and flow all carry over.
Alongside TypeScript support, we've rethought the local development experience.
Instead of providing a partial local environment, the mindset shifted to mimicking production as closely as possible.
You still get a self-contained image with your Satellite, but now you also get the full Console UI included. That means you can manage and test your project locally just like you would on mainnet.
Here's the beautiful part: even though your serverless functions are written in TypeScript, they're bundled and embedded into a Satellite module that's still compiled in Rust behind the scenes.
But you don't need to install Rust. Or Cargo. Or ic-wasm. Or anything that feels complicated or overly specific.
All you need is Node.js and Docker. The container takes care of the rest — building, bundling, and embedding metadata — and gives you a ready-to-run Satellite you can use locally and deploy to production.
In short: just code your functions. The container does the heavy lifting.
This isn’t just a feature announcement — serverless functions in TypeScript are already live and powering real functionality.
I used them to build the ICP-to-cycles swap on cycles.watch, including all the backend logic and assertions. The whole process was documented over a few livestreams, from setup to deployment.
If you're curious, the code is on GitHub, and there’s a playlist on YouTube if you want to follow along and see how it all came together.
We've put together docs and guides to help you get started. If you're already using the Juno CLI, you're just one juno dev eject away from writing your first function, or you can start fresh with npm create juno@latest.
To infinity and beyond,
David
Stay connected with Juno by following us on X/Twitter.
Until now, running a local project meant spinning up an emulator with just enough to build: a single default Satellite container for your app.
That worked. But it wasn’t the full picture.
With the latest changes, local development now mirrors the production environment much more closely. You don’t just get a simplified setup — you get the actual Console UI, orchestration logic, and almost a full infrastructure that behaves like the real thing.
This shift brings something most cloud serverless platforms don't offer: production-level parity, right on your machine.
Local development isn’t just about getting things to run. It’s about understanding how your project behaves, how it scales, and how it integrates with the platform around it.
With this shift, you build with confidence that what works locally will work in production. You don’t need to guess how things will behave once deployed — you’re already working in an environment that mirrors it closely.
It also helps you gradually get familiar with the tools that matter, like the Console UI. You learn to use the same workflows, patterns, and orchestration logic that apply when your app goes live.
This removes a lot of friction when switching environments. There's less surprise, less debugging, and a lot more flow.
It’s local development, but it finally feels like the real thing.
That’s why the lightweight junobuild/satellite image still exists — and still works just as it always has. It’s ideal for CI pipelines, isolated app testing, or local startup when you don’t need the Console and more infrastructure.
This shift in approach isn’t a breaking change. It adds a new default, but doesn’t remove what was already there.
Looking ahead, there's an intention to simplify scripting even further by allowing Datastore and Storage definitions directly in the main juno.config file. The goal is to eventually phase out juno.dev.config and unify configuration — but that’s for the future.
For now, everything remains compatible. You choose what fits best.
If you already have a project configured for local development and want to switch to the new approach:
Update the CLI:
npm i -g @junobuild/cli
Remove your juno.dev.config.ts (or the JavaScript or JSON equivalent)
Update your docker-compose.yml to use the junobuild/skylab image (adjust paths as needed for your project):
services:
  juno-skylab:
    image: junobuild/skylab:latest
    ports:
      # Local replica used to simulate execution
      - 5987:5987
      # Little admin server (e.g. to transfer ICP from the ledger)
      - 5999:5999
      # Console UI (like https://console.juno.build)
      - 5866:5866
    volumes:
      # Persistent volume to store internal state
      - juno_skylab:/juno/.juno
      # Your Juno configuration file.
      # Notably used to provide your development Satellite ID to the emulator.
      - ./juno.config.mjs:/juno/juno.config.mjs
      # Shared folder for deploying and hot-reloading serverless functions
      # For example, when building functions in TypeScript, the output `.mjs` files are placed here.
      # The container then bundles them into your Satellite WASM (also placed here),
      # and automatically upgrades the environment.
      - ./target/deploy:/juno/target/deploy/

volumes:
  juno_skylab:
That’s it — you’re good to go.
✅ Closing Thoughts
This shift removes a lot of friction between idea and execution.
You build in the same structure, use the same tools, and follow the same workflows you'd use in production — but locally, and instantly.
Local development finally feels like you're already in production, just without the pressure.
Stay connected with Juno by following us on X/Twitter.
Why Data Validation Matters in Decentralized Apps
Data validation is always important. However, Web3 comes with its own set of challenges, which make validation an even more important part of building trustworthy apps:
No Central Administrator: Unlike traditional systems, decentralized apps have no admin backdoor to fix data issues
Limited Data Access: Developers often can't directly access or examine user data due to encryption and/or privacy
Data Immutability: Once written to the blockchain, data can be difficult or impossible to modify
Client-Side Vulnerability: Front-end validation can be bypassed by determined users (like in web2)
Security Risks: Invalid or malicious data can compromise application integrity and user trust
Getting validation right from the start is not just a best practice—it's essential for the secure and reliable operation of your application.
on_set_doc is a Hook that is triggered after a document has been written to the database. It offers a way to execute custom logic whenever data is added to or updated in a collection using the setDoc function executed on the client side.
This allows for many use-cases, even for certain types of validation, but this hook runs after the data has already been written.
// Example of validation and cleanup in on_set_doc
#[on_set_doc(collections = ["users"])]
fn on_set_doc(context: OnSetDocContext) -> Result<(), String> {
    // Step 1: Get all context data we'll need upfront
    let collection = context.data.collection;
    let key = context.data.key;
    let doc = &context.data.data.after; // Reference to the full document after update
    let user_data: UserData = decode_doc_data(&doc.data)?; // Decoded custom data from the document

    // Step 2: Validate the data
    if user_data.username.len() < 3 {
        // Step 3: If validation fails, delete the document using low-level store function
        delete_doc_store(
            ic_cdk::id(), // Use Satellite's Principal ID since this is a system operation
            collection,
            key,
            DelDoc {
                version: Some(doc.version), // Use the version from our doc reference
            },
        )?;

        // Log the error instead of returning it to avoid trapping
        ic_cdk::print("Username must be at least 3 characters");
    }

    Ok(())
}
Issues:
The on_set_doc hook only executes AFTER data is already written to the database, which is not ideal for validation.
Since it only happens after the data is already written, it can lead to unwanted effects. For example, say a new document needs to be added to some list: if it is invalid, it shouldn't be added, but because the hook runs after the write, the data ends up in the list anyway before you can reject it. This adds unwanted complexity to your code, forcing the developer to juggle cleanup and business logic in the same on_set_doc function.
Overhead: invalid data is written (costly operation) then might be rejected and need to be deleted (another costly operation)
Not ideal for validation since it can't prevent invalid writes
Can't return success/error messages to the frontend
There are also other Juno hooks, but in general, they provide a way to execute custom logic whenever data is added, modified, or deleted from a Juno datastore collection.
Custom Endpoints are Juno serverless functions that expose new API endpoints through Candid (the Internet Computer's interface description language). They provide a validation layer through custom API routes before data reaches Juno's datastore, allowing for complex multi-step operations with custom validation logic.
caution
This example is provided as-is and is intended for demonstration purposes only. It does not include comprehensive security validations.
use junobuild_satellite::{set_doc_store, SetDoc}; // SetDoc is the struct type for document creation/updates
use junobuild_utils::encode_doc_data;
use ic_cdk::caller;
use candid::{CandidType, Deserialize};

// Simple user data structure
#[derive(CandidType, Deserialize)]
struct UserData {
    username: String,
}

// Custom endpoint for user creation with basic validation
#[ic_cdk_macros::update]
async fn create_user(key: String, user_data: UserData) -> Result<(), String> {
    // Step 1: Validate username (only alphanumeric characters)
    if !user_data.username.chars().all(|c| c.is_alphanumeric()) {
        return Err("Username must contain only letters and numbers".to_string());
    }

    // Step 2: Create and store document
    // First encode our data into a blob that Juno can store into the 'data' field
    let encoded_data = encode_doc_data(&user_data)
        .map_err(|e| format!("Failed to encode user data: {}", e))?;

    // Create a SetDoc instance - this is the required format for setting documents in Juno
    // SetDoc contains only what we want to store - Juno handles all metadata:
    // - created_at/updated_at timestamps
    // - owner (based on caller's Principal)
    // - version management
    let doc = SetDoc {
        data: encoded_data, // The actual data we want to store (as encoded blob)
        description: None,  // Optional field for filtering/searching
        version: None,      // None for new docs, Some(version) for updates
    };

    // Use set_doc_store to save the document
    // This is Juno's low-level storage function that:
    // 1. Takes ownership of the document (caller's Principal)
    // 2. Adds timestamps (created_at, updated_at)
    // 3. Handles versioning
    // 4. Stores the document in the specified collection
    set_doc_store(
        caller(),              // Who is creating this document
        String::from("users"), // Which collection to store in
        key,                   // The document's unique key
        doc,                   // The SetDoc we prepared above
    ).await
}
While custom endpoints offer great flexibility for building specialized workflows, they introduce important security considerations. A key issue is that the original setDoc endpoint remains accessible — meaning users can, to some extent, still bypass your custom validation logic by calling the standard Juno SDK methods directly from the frontend. As a result, even if you've added strict validation in your custom endpoints, the underlying collection can still be modified unless you take additional steps to restrict access.
The common workaround is to restrict the datastore collection to "controller" access so the public can't write to it directly, forcing users to interact only through your custom functions. However, this approach creates its own problems:
All documents will now be "owned" by the controller, not individual users
You lose Juno's built-in permission system for user-specific data access
You'll need to build an entirely new permission system from scratch
This creates a complex, error-prone "hacky workaround" instead of using Juno as designed
Key Limitations:
Original setDoc endpoint remains accessible to users
Users can bypass custom endpoint entirely by using Juno's default endpoints directly (setDoc, setDocs, etc)
Restricting collections to controller access breaks Juno's permission model
Requires building a custom permission system from scratch
The assert_set_doc hook runs BEFORE any data is written to the database, allowing you to validate and reject invalid submissions immediately. This is the most secure validation method in Juno as it integrates directly with the core data storage mechanism.
When a user calls setDoc through the Juno SDK, the assert_set_doc hook is automatically triggered before any data is written to the blockchain. If your validation logic returns an error, the entire operation is cancelled and any changes are rolled back, and the error is returned to the frontend. This ensures invalid data never reaches your datastore in the first place, saving computational resources and maintaining data integrity.
Unlike other approaches, assert_set_doc hooks:
Cannot be bypassed by end users
Integrate seamlessly with Juno's permission model
Allow users to continue using the standard Juno SDK
Keep validation logic directly in your data model
Conserve blockchain resources by validating before storage
Can reject invalid data with descriptive error messages that flow back to the frontend (unlike on_set_doc which runs after storage and can't return validation errors to users)
// Simple assert_set_doc example
#[assert_set_doc(collections = ["users"])]
fn assert_set_doc(context: AssertSetDocContext) -> Result<(), String> {
    match context.data.collection.as_str() {
        "users" => {
            // Access username from the document
            let data = context.data.data.proposed.data.as_object()
                .ok_or("Invalid data format")?;

            let username = data.get("username")
                .and_then(|v| v.as_str())
                .ok_or("Username is required")?;

            // Validate username
            if username.len() < 3 {
                return Err("Username must be at least 3 characters".to_string());
            }

            Ok(())
        },
        _ => Ok(())
    }
}
Key Advantages:
Always runs BEFORE data is written - prevents invalid data entirely
Zero overhead - validation happens in memory before expensive on-chain operations
Cannot be bypassed or circumvented
Prevents invalid data from ever being written
Conserves resources by validating before storage
Integrates directly with Juno's permission model
Keeps validation (assert_set_doc) separate from business logic triggers (on_set_doc)
use junobuild_satellite::{
    set_doc, list_docs, decode_doc_data, encode_doc_data,
    Document, ListParams, ListMatcher
};
use ic_cdk::api::time;
use std::collections::HashMap;

#[assert_set_doc(collections = ["users", "votes", "tags"])]
fn assert_set_doc(context: AssertSetDocContext) -> Result<(), String> {
    match context.data.collection.as_str() {
        "users" => validate_user_document(&context),
        "votes" => validate_vote_document(&context),
        "tags" => validate_tag_document(&context),
        _ => Err(format!("Unknown collection: {}", context.data.collection))
    }
}

fn validate_user_document(context: &AssertSetDocContext) -> Result<(), String> {
    // Decode and validate the user data structure
    let user_data: UserData = decode_doc_data(&context.data.data.proposed.data)
        .map_err(|e| format!("Invalid user data format: {}", e))?;

    // Validate username format (3-20 chars, alphanumeric + limited symbols)
    if !is_valid_username(&user_data.username) {
        return Err("Username must be 3-20 characters and contain only letters, numbers, and underscores".to_string());
    }

    // Check username uniqueness by searching existing documents
    let search_pattern = format!("username={};", user_data.username.to_lowercase());
    let existing_users = list_docs(
        String::from("users"),
        ListParams {
            matcher: Some(ListMatcher {
                description: Some(search_pattern),
                ..Default::default()
            }),
            ..Default::default()
        },
    );

    // If this is an update operation, exclude the current document
    let is_update = context.data.data.before.is_some();

    for (doc_key, _) in existing_users.items {
        if is_update && doc_key == context.data.key {
            continue;
        }
        return Err(format!("Username '{}' is already taken", user_data.username));
    }

    Ok(())
}

fn validate_vote_document(context: &AssertSetDocContext) -> Result<(), String> {
    // Decode vote data
    let vote_data: VoteData = decode_doc_data(&context.data.data.proposed.data)
        .map_err(|e| format!("Invalid vote data format: {}", e))?;

    // Validate vote value constraints
    if vote_data.value < -1.0 || vote_data.value > 1.0 {
        return Err(format!("Vote value must be -1, 0, or 1 (got: {})", vote_data.value));
    }

    // Validate vote weight constraints
    if vote_data.weight < 0.0 || vote_data.weight > 1.0 {
        return Err(format!("Vote weight must be between 0.0 and 1.0 (got: {})", vote_data.weight));
    }

    // Validate tag exists
    let tag_params = ListParams {
        matcher: Some(ListMatcher {
            key: Some(vote_data.tag_key.clone()),
            ..Default::default()
        }),
        ..Default::default()
    };

    let existing_tags = list_docs(String::from("tags"), tag_params);

    if existing_tags.items.is_empty() {
        return Err(format!("Tag not found: {}", vote_data.tag_key));
    }

    // Prevent self-voting
    if vote_data.author_key == vote_data.target_key {
        return Err("Users cannot vote on themselves".to_string());
    }

    Ok(())
}

fn validate_tag_document(context: &AssertSetDocContext) -> Result<(), String> {
    // Decode tag data
    let tag_data: TagData = decode_doc_data(&context.data.data.proposed.data)
        .map_err(|e| format!("Invalid tag data format: {}", e))?;

    // Validate tag name format and uniqueness
    if !is_valid_tag_name(&tag_data.name) {
        return Err("Tag name must be 3-50 characters and contain only letters, numbers, and underscores".to_string());
    }

    // Check tag name uniqueness
    let search_pattern = format!("name={};", tag_data.name.to_lowercase());
    let existing_tags = list_docs(
        String::from("tags"),
        ListParams {
            matcher: Some(ListMatcher {
                description: Some(search_pattern),
                ..Default::default()
            }),
            ..Default::default()
        },
    );

    let is_update = context.data.data.before.is_some();

    for (doc_key, _) in existing_tags.items {
        if is_update && doc_key == context.data.key {
            continue;
        }
        return Err(format!("Tag name '{}' is already taken", tag_data.name));
    }

    // Validate description length
    if tag_data.description.len() > 1024 {
        return Err(format!(
            "Tag description cannot exceed 1024 characters (current length: {})",
            tag_data.description.len()
        ));
    }

    // Validate time periods
    validate_time_periods(&tag_data.time_periods)?;

    // Validate vote reward
    if tag_data.vote_reward < 0.0 || tag_data.vote_reward > 1.0 {
        return Err(format!(
            "Vote reward must be between 0.0 and 1.0 (got: {})",
            tag_data.vote_reward
        ));
    }

    Ok(())
}

fn validate_time_periods(periods: &[TimePeriod]) -> Result<(), String> {
    if periods.is_empty() {
        return Err("Tag must have at least 1 time period".to_string());
    }

    if periods.len() > 10 {
        return Err(format!(
            "Tag cannot have more than 10 time periods (got: {})",
            periods.len()
        ));
    }

    // Last period must be "infinity" (999 months)
    let last_period = periods.last().unwrap();
    if last_period.months != 999 {
        return Err(format!(
            "Last period must be 999 months (got: {})",
            last_period.months
        ));
    }

    // Validate each period's configuration
    for (i, period) in periods.iter().enumerate() {
        // Validate multiplier range (0.05 to 10.0)
        if period.multiplier < 0.05 || period.multiplier > 10.0 {
            return Err(format!(
                "Multiplier for period {} must be between 0.05 and 10.0 (got: {})",
                i + 1,
                period.multiplier
            ));
        }

        // Validate multiplier step increments (0.05)
        let multiplier_int = (period.multiplier * 100.0).round();
        let remainder = multiplier_int % 5.0;
        if remainder > 0.000001 {
            return Err(format!(
                "Multiplier for period {} must use 0.05 step increments (got: {})",
                i + 1,
                period.multiplier
            ));
        }

        // Validate month duration
        if period.months == 0 {
            return Err(format!(
                "Months for period {} must be greater than 0 (got: {})",
                i + 1,
                period.months
            ));
        }
    }

    Ok(())
}
Remember: Security is about preventing unauthorized or invalid operations, not just making them difficult. assert_set_doc hooks provide the only guaranteed way to validate all data operations in Juno's Datastore.
✍️ This blog post was contributed by Fairtale, creators of Solutio.
Solutio is a new kind of platform where users crowdfund the software they need, and developers earn by building it. Instead of waiting for maintainers or hiring devs alone, communities can come together to fund bug fixes, new features, or even entire tools — paying only when the result meets their expectations.
November’s been an exciting month, especially since I’ve officially started working full-time on Juno — thanks to the recently announced funding! This shift has already led to delivering some fantastic new features for developers, like automated backups (finally!!!), support for large WASM modules, the ability to buy cycles with Stripe, and a few other goodies.
These updates are all about making development smoother and more efficient, whether you’re building dapps, smart contracts, or managing your projects. Let’s dive into what’s new!
To kick things off, I’d like to highlight the introduction of backups—a feature I’ve been waiting for forever!
This addition brings a crucial layer of security for developers, letting you safeguard your modules and restore them whenever needed.
Here’s how it works: Currently, one backup per module is supported. You can manage backups manually via both the Console UI and the CLI, with options to create, restore, or delete them. Additionally, backups are automatically created during the upgrade process, taking a snapshot before transitioning to a new version. For those who prefer full control, advanced options let you skip creating a backup or avoid overwriting an existing one.
For anyone who, like me, feels a bit tense whenever it’s time to execute an upgrade, this feature is a huge relief. It’s really a great addition!
Getting cycles has become more straightforward, particularly for newcomers and non-crypto-native users, with the ability to buy cycles directly through Stripe, thanks to our friends at cycle.express.
With this integration, developers can simply make a payment, and the cycles are added directly to their module.
This was both a useful feature, as it makes it easy to transfer ICP from OISY to the developer's wallet on Juno, and an opportunity for me to try out the integration with various ICRC standards I implemented for the foundation.
I also used the opportunity to improve the UI/UX of the Receive feature by displaying wallet addresses with a QR code. This update wraps up a few related tasks, such as adding support for sending ICP to the outside world.
Support for larger WASM modules (over 2MB) has been added. While none of Juno's stock modules—such as Satellites, Mission Control, or Orbiter (Analytics)—come close to this size when gzipped, this limit could quickly be reached by developers using serverless functions.
By extending this limit, developers have more flexibility to embed additional third-party libraries and expand their module capabilities.
This support has been implemented across the CLI, the Console UI, and even local development environments using Docker, ensuring a consistent experience for all workflows.
Until recently, newly launched Satellites lacked a default page for web hosting. This meant that developers opening their project right after creation would just see a blank page in the browser.
That’s why every new Satellite now comes with a sleek, informative default web page—delivering a great first impression right out of the box! ✨
Another handy tool introduced this month is support for pre- and post-deploy scripts in the CLI. With this feature, developers can now define a list of commands to be executed at specific stages of the deployment process.
The pre-deploy scripts are perfect for automating tasks like:
Compiling assets.
Running tests or linters.
Preparing production-ready files.
Likewise, post-deploy scripts come in handy for follow-up tasks, such as:
Sending notifications or alerts to administrators.
Cleaning up temporary files.
Logging deployment information for auditing.
import { defineConfig } from "@junobuild/config";

/** @type {import('@junobuild/config').JunoConfig} */
export default defineConfig({
  satellite: {
    id: "ck4tp-aaaaa-aaaaa-abbbb-cai",
    source: "build",
    predeploy: ["npm run lint", "npm run build"],
    postdeploy: ["node hello.mjs"]
  }
});
Maybe not the most groundbreaking update, but the dark theme got even darker. 🧛♂️🦇 Perfect for those late-night coding sessions—or if you just enjoy the vibe!
Another area that saw improvement is the documentation. I aimed to make it more intuitive and useful for both newcomers and experienced developers. That’s why I revamped the guides section. Now, when you visit, you’ll be greeted with a simple question: “What are you looking to do? Build or Host?” 🎯. This approach should hopefully make onboarding smoother and more straightforward for developers.
The CLI documentation also received an upgrade. Updating it manually was a hassle, so I automated the process. Now, CLI help commands generate markdown files that are automatically embedded into the website every week. No more manual updates for me, and it’s always up to date for you! 😄
I also dedicated time to documenting all the configuration options in detail, ensuring every setting is clearly explained.
And as a finishing touch, I refreshed the landing page. 👨🎨
I hope these features get you as excited as they got me! I’m already looking forward to what’s next. Speak soon for more updates!
David
Stay connected with Juno by following us on X/Twitter.
As you may know, I recently proposed transforming Juno into a Decentralized Autonomous Organization through an SNS swap. Unfortunately, it didn’t reach its funding goal, so Juno didn’t become a DAO.
After the failure, three options came to mind: retrying the ICO with a lower target, continuing to hack as an indie project for a while, or simply killing it.
In the days that followed, I also received a few other options, including interest from venture capitalists for potential seed funding which wasn’t an option for me.
Then, something unexpected happened:
The DFINITY foundation’s CTO, Jan Camenisch, reached out and proposed an alternative: funding the project through 2025.
I took a few days to consider the offer and ultimately accepted.
This support is a tremendous vote of confidence in Juno’s potential and importance within the ecosystem.
It’s worth emphasizing that the foundation’s support comes with no strings attached. They do not receive any stake in Juno, have no preferential treatment, and will not influence decisions. Should I ever consider another SNS DAO or any other funding route in the future, the foundation would have no special allocation or shares. This remains my project, and I am the sole decision-maker and controller.
This support also strengthens the relationship between Juno and the foundation, allowing us to stay in close contact to discuss the roadmap. It’s an arrangement that respects autonomy while fostering collaboration to advance the Internet Computer. As they say, it takes two to tango.
This funding opens up a world of possibilities and marks the first time I’ll work 100% on a project I created. I’m thrilled to continue building Juno as a resource that makes decentralized development accessible and impactful for everyone.
Obviously, while Juno remains under my sole ownership for now, I still believe that Juno should eventually become a DAO. Promoting full control for developers while retaining centralized ownership would be paradoxical. When the time is right, a DAO will ensure that Juno’s growth, security, and transparency are upheld through community-driven governance.
Thank you to everyone who believed in Juno through the SNS campaign and beyond 🙏💙. Your support has been invaluable, and this new phase wouldn’t be possible without you. Here’s to what lies ahead—a new chapter indeed.
To infinity and beyond,
David
Stay connected with Juno by following us on X/Twitter.
The SNS swap on the Internet Computer failed on Saturday, October 12, 2024 (ICP swap data). As a result, Juno did not become a Decentralized Autonomous Organization (DAO).
Hey everyone 👋
I hope you’re all doing well! I’m excited to share some big news with you today. Over the next few weeks, we’re taking some significant steps toward shaping the future of Juno, and I wanted to keep you in the loop about what’s coming.
As you may know, Juno is a blockchain-as-a-service ecosystem that empowers developers to build decentralized apps efficiently. One of its strengths is that it gives developers full and sole control over their work. For this reason, it would be paradoxical to continue operating the platform with a centralized model—i.e., with me being the sole controller of services, such as the administration console or our CDN. That’s why I’m thrilled to unveil that, in the upcoming weeks, I’m aiming to fix this bias by proposing that Juno becomes a Decentralized Autonomous Organization (DAO).
While this potential shift is on the horizon, there are a few key steps you can take to stay informed and involved in the process. Here’s how you can help shape the future of developing on Web3:
To ensure you don’t miss any crucial updates, I encourage you to sign up for our newsletter. The journey to proposing a DAO and making it a reality involves multiple steps, each requiring your participation. By signing up, you’ll receive timely notifications whenever there’s an opportunity to get involved and make a real impact.
The white paper has been updated to continue presenting the vision; however, the tokenomics aspect has been notably removed, as it is no longer relevant following the failure of the SNS DAO.
I’ve put together a white paper that outlines the reasoning and vision I have for a Juno Build DAO. I highly recommend giving it a read to fully understand what I’m aiming to achieve.
Questions are always welcome at any time, but if you’re looking to engage directly, I’ll be hosting a Juno Live session on 9 September at 3:00 PM CET. Join the livestream on YouTube to interact in real-time.
The proposal was approved and executed on September 26, 2024.
While I typically avoid relying on third parties for core features, transforming Juno into a DAO without leveraging such services would be an immense task. That’s why I’m proposing to use the Internet Computer’s built-in solution for creating and maintaining DAOs, known as SNS.
To kickstart the process of transforming our ecosystem, this framework requires submitting a proposal to the Internet Computer’s governance, known as NNS. This step ensures a decentralized and democratic process. It also prepares for the handover of control of the smart contracts and allows all participants to review the parameters involved.
Once this proposal is live, your voice will be crucial! You’ll have the opportunity to vote on whether to accept or reject it.
Please note that the following does not constitute financial advice.
If the proposal is approved, an initial decentralization swap will be kicked off. The goal here is to raise the initial funds for the DAO and to decentralize the voting power. Think of it like crowdfunding, where people contribute ICP tokens. In return, participants are rewarded with staked tokens, giving them a share of the DAO's voting power.
For the swap to succeed, it requires at least 100 participants and 200,000 ICP tokens. Otherwise, the entire journey of transforming Juno into a DAO fails. So, if you're excited about being part of this adventure, this could be the step where you make a real difference — if you decide, of your own free will, to do so.
If the swap fails, it will mark the beginning of the end. While the platform won't be deprecated immediately, I will gradually phase it out over the course of, let's say, a year. During this time, Juno will stop accepting new developers, and I will no longer develop new features, promote the ecosystem, or invest in it beyond maintenance.
For those already using Juno, I want to reassure you that I won’t leave you stranded. I’m committed to offering support to help you transition and find suitable alternatives. I’m not, I hope, that much of an a-hole. I try to maintain good karma.
On a personal note, I would also be deprecating all of my personal projects, such as proposals.network, as I have no intention of using any developer tooling other than Juno for my own Web3 projects.
If the swap is successful, hooray! Juno will officially become a DAO, allowing you to actively participate in the governance of the project and start using the new JUNOBUILD token, among other exciting developments.
This will mark the beginning of an exciting new chapter, with the community at the heart of Juno's future.
Memecoins are starting to gain significant traction. Some of these tokens, such as Windowge98, Damonic Welleams, Wumbo, Spellkaster and $stik, have reached high prices and attracted many retail investors into the ecosystem. Now, you may be wondering how these meme tokens were launched. In this article, we will walk you through all the steps you need to follow in order to create your own memecoin project.
From creating the token smart contract (canister) to building a marketing website using Juno, and finally launching the token on ICPSwap, a major decentralized exchange (DEX) on ICP, we've got you covered.
We will also provide useful tips to ensure your memecoin project is successful. By the end of this article, you will have all the information needed to launch your token.
important
This article is for educational purposes only and is not financial advice of any form.
The Internet Computer (ICP) is a blockchain-based platform that aims to create a new type of internet, one that is decentralized, secure, and scalable. Developed, among others, by the DFINITY Foundation, the Internet Computer is designed to serve as a global public compute infrastructure, allowing developers to build and deploy decentralized applications (dApps) and services directly on the blockchain. Learn more about ICP
Juno is a blockchain-as-a-service (“blockchainless”) platform that empowers developers to build decentralized apps efficiently. Similar to Web2 cloud service platforms but with significant improvements, it offers a comprehensive toolkit to scaffold secure and efficient projects running on the blockchain.
In short, Juno is the Google Firebase alternative for Web3.
There are simpler ways to launch your own token that do not involve scripting, such as using no-code platforms like ICTO, ICPEx or ICPI.
However, since Juno is dedicated to providing developers with full ownership without compromise, this tutorial showcases an approach that aligns with our core values.
If you prefer to use one of those services, that's cool. Some of those also share these values; we just suggest you do your own research before making a decision.
And who knows, maybe in the future, Juno itself will make launching ledgers to the moon easy too! 😉
To deploy a ledger for your token, proceed as follows:
Make sure you have the dfx CLI installed on your machine. If not, follow this guide to complete the installation.
Creating a canister requires cycles, which measure and pay for resources like memory, storage, and compute power. Follow this guide to load cycles on your machine for deploying your ledger.
The following steps assume that you have cycles on your machine
On your computer, create an empty folder named myToken and open it in your favorite editor.
Create a file inside the folder, name it dfx.json, and paste in code along the lines of the sketch below.
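A configuration roughly like this works as a starting point; the candid and wasm URLs are placeholders for an ICRC-1 ledger release published by DFINITY, so replace them with the release you want to target:

{
  "canisters": {
    "myToken": {
      "type": "custom",
      "candid": "<URL-TO-THE-ICRC1-LEDGER-ledger.did>",
      "wasm": "<URL-TO-THE-ic-icrc1-ledger.wasm.gz>"
    }
  },
  "defaults": {
    "build": {
      "args": "",
      "packtool": ""
    }
  },
  "output_env_file": ".env",
  "version": 1
}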
Next, we are going to define some parameters for our token and prepare the script for deployment.
Create a new file named deploy.sh and paste the following code:
#!/usr/bin/env bash

# Token settings
TOKEN_NAME="FROGIE"
TOKEN_SYMBOL="FRG"
TRANSFER_FEE=10000
PRE_MINTED_TOKENS=100_000_000_00_000_000
FEATURE_FLAGS=true
TRIGGER_THRESHOLD=2000
CYCLE_FOR_ARCHIVE_CREATION=10000000000000
NUM_OF_BLOCK_TO_ARCHIVE=1000

# Identities
dfx identity use default
DEFAULT=$(dfx identity get-principal)

dfx identity new archive_controller
dfx identity use archive_controller
ARCHIVE_CONTROLLER=$(dfx identity get-principal)

dfx identity new minter
dfx identity use minter
MINTER=$(dfx identity get-principal)

# Switch back to the identity that contains cycles
dfx identity use "<YOUR-IDENTITY>"

# Create and deploy the token canister
dfx canister create myToken --network ic

dfx deploy myToken --network ic --argument "(variant {Init = record {
  token_symbol = \"${TOKEN_SYMBOL}\";
  token_name = \"${TOKEN_NAME}\";
  minting_account = record { owner = principal \"${MINTER}\" };
  transfer_fee = ${TRANSFER_FEE};
  metadata = vec {};
  feature_flags = opt record { icrc2 = ${FEATURE_FLAGS} };
  initial_balances = vec {
    record {
      record { owner = principal \"${DEFAULT}\"; };
      ${PRE_MINTED_TOKENS};
    };
  };
  archive_options = record {
    num_blocks_to_archive = ${NUM_OF_BLOCK_TO_ARCHIVE};
    trigger_threshold = ${TRIGGER_THRESHOLD};
    controller_id = principal \"${ARCHIVE_CONTROLLER}\";
    cycles_for_archive_creation = opt ${CYCLE_FOR_ARCHIVE_CREATION};
  };
}})"
In this script, we define our token's name, symbol, transfer fee, and initial supply. Adjust these settings to match your tokenomics and token information details. For our token, we are premining 100 million tokens.
The script also specifies default settings for the token and sets up identities for minting and archiving.
note
Ensure you switch back to the identity that contains the cycles on your machine before running the commands below.
Once the file is saved, run the command below in your terminal to deploy the token canister on the network:
./deploy.sh
If all the previous steps are successful, you should get a link in this format https://a4gq6-oaaaa-aaaab-qaa4q-cai.raw.icp0.io/?id=<TOKEN-CANISTER-ID> where TOKEN-CANISTER-ID is the id of your token ledger that was deployed.
All the premined tokens are now held by the principal address of the default identity. You can transfer these to an external wallet like Plug to make distribution easier, since using the command line to distribute the tokens is a little cumbersome (an example transfer is shown below).
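For reference, a single transfer from the command line looks roughly like this; the wallet principal and amount are placeholders, and the amount is expressed in the token's smallest unit:

# Send tokens from the default identity to an external wallet
dfx identity use default
dfx canister call myToken icrc1_transfer \
  '(record { to = record { owner = principal "<WALLET-PRINCIPAL>" }; amount = 100_000_000 })' \
  --network ic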
Learn more about creating token canisters
The next step is to set up a marketing website for your project.
Select no to configure the local development emulator
Select yes to install the dependencies
Select yes to install Juno's CLI tool. The Juno CLI will help us deploy our project to the satellite.
Navigate to the project folder myWebsite and open it in your favorite code editor. If every previous step was successful, running npm run dev in the terminal will open the project in the browser, and you should have something similar to this.
In the above code, we created a simple website to display the logo of our token, as well as its name, symbol, and total supply. There is also a button that allows the user to buy our token from an exchange where it is listed.
Edit the code above to display the information of your token including the name, symbol, total supply, and logo.
Now that we've connected our project to the satellite, we can compile and deploy the website.
npm run build
The above command compiles our website and outputs the compiled files in the dist folder
juno deploy
This will deploy our compiled files to the satellite that we linked our website to.
At this stage, if all the previous steps are successful, the command will output a link which is in this format https://<SATELLITE_ID>.icp0.io where SATELLITE_ID is the id of the satellite that we connected our project to.
tip
Running juno open in your terminal opens your project in your favorite browser.
Opening the link in the browser, you should have something like this below
In this section, we will look at how to list our newly created token on ICPSwap.
ICPSwap is a decentralized exchange that facilitates token trading and swapping by allowing tokens to be listed and liquidity pools to be created for different token pairs.
And because ICPSwap is a decentralized autonomous organization (DAO) controlled by the community members, you need to submit a proposal for your token to be added on the exchange. This proposal will be voted on by the community members. If the proposal passes, the token will be listed on this exchange.
We will create a proposal to add our token on ICPSwap in the following steps.
Click on the three dots in the right corner and select make proposal
Select MOTION as the proposal type
Add a descriptive title, something like "ADD FROGIE TO THE TOKEN LIST"
In the summary section, add all the details about your token, for example the token canister address, social media handles, and any other information you feel will help voters understand more about your token
Once you have filled in all the fields, click submit and the proposal will be submitted.
NOTE: You will be charged a fee of 50 ICS for this service, therefore ensure you have enough ICS balance before you perform this step.
The voting duration for proposals on the ICPSwap platform is typically three days. If a proposal passes during this voting period, your token will be listed on the exchange and will be tradable.
Once your token is available for trading, you can update the link on the Buy Frogie Now button to redirect the user to the exchange from where they can buy the token.
note
You can also use proposals.network as an alternative to submit a proposal to any SNS project.
If you have reached this step without any errors, congratulations, you have created your first meme coin project. 🥳
Now you can start marketing to attract more users and holders. Good luck! 🤞
The first step to creating a successful meme coin is finding a unique topic that resonates with people. Your concept should be relatable, funny or nostalgic. Capture the essence of internet culture with a catchy name and logo that embodies the humor and appeal of your chosen meme.
Most successful meme coin projects hire specialized crypto influencer marketing teams with extensive networks. Partner with online personalities who like memes or crypto and have them talk about your coin to their followers.
In this article, we have covered everything you need to launch a successful memecoin project, from creating the token canister, to creating a marketing website using Juno and listing the token on ICPSwap.
This article is for educational purposes only and is not financial advice of any form. Do Your Own Research (DYOR) if you want to invest in memecoins.
👋
Stay connected with Juno by following us on Twitter to keep up with our latest updates.
And if you made it this far, we’d love to have you join the Juno community on Discord. 😉
Renaming a route in your web application is a common task, but it’s crucial to handle it correctly to avoid breaking links and negatively impacting your SEO. Redirecting the old route to the new one ensures a seamless user experience and maintains your site's search engine rankings.
In this blog post, we’ll guide you through the steps to set up a redirection after renaming one of your pages.
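As a preview, a redirect in Juno is declared in your juno.config file. Here is a minimal sketch, assuming the redirects option of the satellite's storage configuration; the exact shape is covered in the hosting configuration reference.

import { defineConfig } from "@junobuild/config";

export default defineConfig({
  satellite: {
    id: "<SATELLITE_ID>",
    source: "build",
    storage: {
      redirects: [
        {
          // Permanently redirect the renamed route to its new location
          source: "/old-page",
          location: "/new-page",
          code: 301
        }
      ]
    }
  }
});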