One of the goals with Juno has always been to make building decentralized, secure apps feel like something you're already used to. No weird mental models. No boilerplate-heavy magic. Just code that does what you expect, without touching infrastructure.
And with this release, we're taking another step in that direction:
You can now write serverless functions in TypeScript.
If you're a JavaScript developer, you can define backend behavior right inside your container. It runs in a secure, isolated environment with access to the same hooks and assertions you'd use in a typical Juno Satellite.
No need to manage infrastructure. No need to deploy a separate service. Just write a function, and Juno takes care of the rest.
Cherry on top: the structure mirrors the Rust implementation, so everything from lifecycle to data handling feels consistent. Switching between the two, or migrating later, is smooth and intuitive.
Rust is still the best choice for performance-heavy apps. That's not changing.
But let's be real: sometimes you just want to ship something quickly. Maybe it's a prototype. Maybe it's a feature you want to test in production. Or maybe you just want to stay in the JavaScript world because it's what you know best.
Now you can.
You get most of the same tools, like:
Hooks that react to document or asset events (onSetDoc, onDeleteAsset, etc.)
Assertions to validate operations (assertSetDoc, etc.)
Utility functions to handle documents, storage, and even call other canisters on ICP
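To give a rough idea of the shape this takes, here is a minimal sketch of a hook and an assertion written in TypeScript. The package name and helpers shown (defineHook, defineAssert, decodeDocData, the collections/run/assert options) are assumptions based on how the Rust API is described in this post, so double-check the documentation for the exact imports and signatures.

```typescript
// Minimal sketch only: the imports, helpers, and context shape below are assumptions
// mirroring the Rust API described in this post. Verify against the official docs.
import { defineAssert, defineHook, decodeDocData } from "@junobuild/functions";

// Runs after a document has been written to the "posts" collection
export const onSetDoc = defineHook({
  collections: ["posts"],
  run: async (context) => {
    console.log("Document written:", context.data.key);
  }
});

// Runs before a document is written; throwing rejects the operation
export const assertSetDoc = defineAssert({
  collections: ["posts"],
  assert: (context) => {
    const { title } = decodeDocData<{ title: string }>(context.data.data.proposed.data);

    if (title.length < 3) {
      throw new Error("Title must be at least 3 characters");
    }
  }
});
```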
The JavaScript runtime is intentionally lightweight. While it doesn't include full Node.js support, we're adding polyfills gradually based on real-world needs. Things like console.log, TextEncoder, Blob, and even Math.random — already covered.
The approach to writing serverless functions in Rust and TypeScript is aligned by design. That means if you outgrow your TS functions, migrating to Rust won't feel like starting from scratch. The APIs, structure, and flow all carry over.
Alongside TypeScript support, we've rethought the local development experience.
Instead of providing a partial local environment, the mindset shifted to mimicking production as closely as possible.
You still get a self-contained image with your Satellite, but now you also get the full Console UI included. That means you can manage and test your project locally just like you would on mainnet.
Here's the beautiful part: even though your serverless functions are written in TypeScript, they're bundled and embedded into a Satellite module that's still compiled in Rust behind the scenes.
But you don't need to install Rust. Or Cargo. Or ic-wasm. Or anything that feels complicated or overly specific.
All you need is Node.js and Docker. The container takes care of the rest: building, bundling, and embedding metadata, giving you a Satellite that runs locally and is ready to deploy to production.
In short: just code your functions. The container does the heavy lifting.
This isn’t just a feature announcement — serverless functions in TypeScript are already live and powering real functionality.
I used them to build the ICP-to-cycles swap on cycles.watch, including all the backend logic and assertions. The whole process was documented over a few livestreams, from setup to deployment.
If you're curious, the code is on GitHub, and there’s a playlist on YouTube if you want to follow along and see how it all came together.
We've put together docs and guides to help you get started. If you're already using the Juno CLI, you're just one juno dev eject away from writing your first function, or you can start fresh with npm create juno@latest.
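In other words:

```bash
# Add serverless functions to an existing Juno project
juno dev eject

# Or scaffold a brand-new project
npm create juno@latest
```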
To infinity and beyond,
David
Stay connected with Juno by following us on X/Twitter.
Until now, running a project locally meant spinning up an emulator with just enough to build: a single default Satellite container for your app.
That worked. But it wasn’t the full picture.
With the latest changes, local development now mirrors the production environment much more closely. You don’t just get a simplified setup — you get the actual Console UI, orchestration logic, and almost a full infrastructure that behaves like the real thing.
This shift brings something most cloud serverless platforms don't offer: production-level parity, right on your machine.
Local development isn’t just about getting things to run. It’s about understanding how your project behaves, how it scales, and how it integrates with the platform around it.
With this shift, you build with confidence that what works locally will work in production. You don’t need to guess how things will behave once deployed — you’re already working in an environment that mirrors it closely.
It also helps you gradually get familiar with the tools that matter, like the Console UI. You learn to use the same workflows, patterns, and orchestration logic that apply when your app goes live.
This removes a lot of friction when switching environments. There's less surprise, less debugging, and a lot more flow.
It’s local development, but it finally feels like the real thing.
That’s why the lightweight junobuild/satellite image still exists — and still works just as it always has. It’s ideal for CI pipelines, isolated app testing, or a quick local start when you don’t need the Console UI and the extra infrastructure.
This shift in approach isn’t a breaking change. It adds a new default, but doesn’t remove what was already there.
Looking ahead, there's an intention to simplify scripting even further by allowing Datastore and Storage definitions directly in the main juno.config file. The goal is to eventually phase out juno.dev.config and unify configuration — but that’s for the future.
For now, everything remains compatible. You choose what fits best.
If you already have a project configured for local development and want to switch to the new approach:
Update the CLI:
npm i -g @junobuild/cli
Remove your juno.dev.config.ts (or the JavaScript or JSON equivalent)
Update your docker-compose.yml to use the junobuild/skylab image (adjust paths as needed for your project):
```yaml
services:
  juno-skylab:
    image: junobuild/skylab:latest
    ports:
      # Local replica used to simulate execution
      - 5987:5987
      # Little admin server (e.g. to transfer ICP from the ledger)
      - 5999:5999
      # Console UI (like https://console.juno.build)
      - 5866:5866
    volumes:
      # Persistent volume to store internal state
      - juno_skylab:/juno/.juno
      # Your Juno configuration file.
      # Notably used to provide your development Satellite ID to the emulator.
      - ./juno.config.mjs:/juno/juno.config.mjs
      # Shared folder for deploying and hot-reloading serverless functions
      # For example, when building functions in TypeScript, the output `.mjs` files are placed here.
      # The container then bundles them into your Satellite WASM (also placed here),
      # and automatically upgrades the environment.
      - ./target/deploy:/juno/target/deploy/

volumes:
  juno_skylab:
```
That’s it — you’re good to go.
✅ Closing Thoughts
This shift removes a lot of friction between idea and execution.
You build in the same structure, use the same tools, and follow the same workflows you'd use in production — but locally, and instantly.
Local development finally feels like you're already in production, just without the pressure.
Stay connected with Juno by following us on X/Twitter.
Why Data Validation Matters in Decentralized Apps
Data validation is always important. However, web3 comes with its own set of challenges that make validation an even more important part of building trustworthy apps:
No Central Administrator: Unlike traditional systems, decentralized apps have no admin backdoor to fix data issues
Limited Data Access: Developers often can't directly access or examine user data due to encryption and/or privacy
Data Immutability: Once written to the blockchain, data can be difficult or impossible to modify
Client-Side Vulnerability: Front-end validation can be bypassed by determined users (like in web2)
Security Risks: Invalid or malicious data can compromise application integrity and user trust
Getting validation right from the start is not just a best practice—it's essential for the secure and reliable operation of your application.
on_set_doc is a Hook that is triggered after a document has been written to the database. It offers a way to execute custom logic whenever data is added to or updated in a collection via the setDoc function called on the client side.
This allows for many use-cases, even for certain types of validation, but this hook runs after the data has already been written.
```rust
// Example of validation and cleanup in on_set_doc
#[on_set_doc(collections = ["users"])]
fn on_set_doc(context: OnSetDocContext) -> Result<(), String> {
    // Step 1: Get all context data we'll need upfront
    let collection = context.data.collection;
    let key = context.data.key;
    let doc = &context.data.data.after; // Reference to the full document after update
    let user_data: UserData = decode_doc_data(&doc.data)?; // Decoded custom data from the document

    // Step 2: Validate the data
    if user_data.username.len() < 3 {
        // Step 3: If validation fails, delete the document using low-level store function
        delete_doc_store(
            ic_cdk::id(), // Use Satellite's Principal ID since this is a system operation
            collection,
            key,
            DelDoc {
                version: Some(doc.version), // Use the version from our doc reference
            },
        )?;

        // Log the error instead of returning it to avoid trapping
        ic_cdk::print("Username must be at least 3 characters");
    }

    Ok(())
}
```
Issues:
The on_set_doc hook only executes AFTER data is already written to the database, which is not ideal for validation.
Since it only happens after the data is already written, it can lead to unwanted side effects. For example, say a new document also needs to be added to some list: if the document is invalid, it shouldn't be added, but because the hook runs after the write, the invalid data is stored anyway before you get a chance to reject it. This adds unwanted complexity to your code, forcing you to handle validation, cleanup, and follow-up logic inside the same on_set_doc hook.
Overhead: invalid data is written (costly operation) then might be rejected and need to be deleted (another costly operation)
Not ideal for validation since it can't prevent invalid writes
Can't return success/error messages to the frontend
There are also other Juno hooks, but in general, they provide a way to execute custom logic whenever data is added, modified, or deleted from a Juno datastore collection.
Custom Endpoints are Juno serverless functions that expose new API endpoints through Candid (the Internet Computer's interface description language). They provide a validation layer through custom API routes before data reaches Juno's datastore, allowing for complex multi-step operations with custom validation logic.
caution
This example is provided as-is and is intended for demonstration purposes only. It does not include comprehensive security validations.
```rust
use junobuild_satellite::{set_doc_store, SetDoc}; // SetDoc is the struct type for document creation/updates
use junobuild_utils::encode_doc_data;
use ic_cdk::caller;
use candid::{CandidType, Deserialize};

// Simple user data structure
#[derive(CandidType, Deserialize)]
struct UserData {
    username: String,
}

// Custom endpoint for user creation with basic validation
#[ic_cdk_macros::update]
async fn create_user(key: String, user_data: UserData) -> Result<(), String> {
    // Step 1: Validate username (only alphanumeric characters)
    if !user_data.username.chars().all(|c| c.is_alphanumeric()) {
        return Err("Username must contain only letters and numbers".to_string());
    }

    // Step 2: Create and store document
    // First encode our data into a blob that Juno can store into the 'data' field
    let encoded_data = encode_doc_data(&user_data)
        .map_err(|e| format!("Failed to encode user data: {}", e))?;

    // Create a SetDoc instance - this is the required format for setting documents in Juno
    // SetDoc contains only what we want to store - Juno handles all metadata:
    // - created_at/updated_at timestamps
    // - owner (based on caller's Principal)
    // - version management
    let doc = SetDoc {
        data: encoded_data,  // The actual data we want to store (as encoded blob)
        description: None,   // Optional field for filtering/searching
        version: None,       // None for new docs, Some(version) for updates
    };

    // Use set_doc_store to save the document
    // This is Juno's low-level storage function that:
    // 1. Takes ownership of the document (caller's Principal)
    // 2. Adds timestamps (created_at, updated_at)
    // 3. Handles versioning
    // 4. Stores the document in the specified collection
    set_doc_store(
        caller(),               // Who is creating this document
        String::from("users"),  // Which collection to store in
        key,                    // The document's unique key
        doc,                    // The SetDoc we prepared above
    ).await
}
```
While custom endpoints offer great flexibility for building specialized workflows, they introduce important security considerations. A key issue is that the original setDoc endpoint remains accessible — meaning users can, to some extent, still bypass your custom validation logic by calling the standard Juno SDK methods directly from the frontend. As a result, even if you've added strict validation in your custom endpoints, the underlying collection can still be modified unless you take additional steps to restrict access.
The common workaround is to restrict the datastore collection to "controller" access so the public can't write to it directly, forcing users to interact only through your custom functions. However, this approach creates its own problems:
All documents will now be "owned" by the controller, not individual users
You lose Juno's built-in permission system for user-specific data access
You'll need to build an entirely new permission system from scratch
This creates a complex, error-prone "hacky workaround" instead of using Juno as designed
Key Limitations:
Original setDoc endpoint remains accessible to users
Users can bypass custom endpoint entirely by using Juno's default endpoints directly (setDoc, setDocs, etc)
Restricting collections to controller access breaks Juno's permission model
Requires building a custom permission system from scratch
The assert_set_doc hook runs BEFORE any data is written to the database, allowing you to validate and reject invalid submissions immediately. This is the most secure validation method in Juno as it integrates directly with the core data storage mechanism.
When a user calls setDoc through the Juno SDK, the assert_set_doc hook is automatically triggered before any data is written to the blockchain. If your validation logic returns an error, the entire operation is cancelled and any changes are rolled back, and the error is returned to the frontend. This ensures invalid data never reaches your datastore in the first place, saving computational resources and maintaining data integrity.
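On the client side, nothing changes: you keep calling setDoc from the Juno SDK, and a rejected assertion surfaces as an error you can catch. The snippet below is a sketch assuming the standard @junobuild/core setDoc call and the "users" collection used in the examples of this post.

```typescript
import { setDoc } from "@junobuild/core";

// Attempt to create a user document. If the Satellite's assert_set_doc hook
// rejects it, nothing is written and the error message is thrown here.
try {
  await setDoc({
    collection: "users",
    doc: {
      key: crypto.randomUUID(),
      data: { username: "ab" } // too short, will be rejected by the assertion
    }
  });
} catch (err) {
  // e.g. "Username must be at least 3 characters"
  console.error("Rejected by the Satellite:", err);
}
```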
Unlike other approaches, assert_set_doc hooks:
Cannot be bypassed by end users
Integrate seamlessly with Juno's permission model
Allow users to continue using the standard Juno SDK
Keep validation logic directly in your data model
Conserve blockchain resources by validating before storage
Can reject invalid data with descriptive error messages that flow back to the frontend (unlike on_set_doc which runs after storage and can't return validation errors to users)
```rust
// Simple assert_set_doc example
#[assert_set_doc(collections = ["users"])]
fn assert_set_doc(context: AssertSetDocContext) -> Result<(), String> {
    match context.data.collection.as_str() {
        "users" => {
            // Decode the proposed document data into a generic JSON value
            let data: serde_json::Value = decode_doc_data(&context.data.data.proposed.data)
                .map_err(|e| format!("Invalid data format: {}", e))?;

            // Access the username from the document
            let username = data.get("username")
                .and_then(|v| v.as_str())
                .ok_or("Username is required")?;

            // Validate username
            if username.len() < 3 {
                return Err("Username must be at least 3 characters".to_string());
            }

            Ok(())
        },
        _ => Ok(())
    }
}
```
Key Advantages:
Always runs BEFORE data is written - prevents invalid data entirely
Zero overhead - validation happens in memory before expensive on-chain operations
Cannot be bypassed or circumvented
Prevents invalid data from ever being written
Conserves resources by validating before storage
Integrates directly with Juno's permission model
Keeps validation (assert_set_doc) separate from business logic triggers (on_set_doc)
```rust
use junobuild_satellite::{
    set_doc, list_docs, decode_doc_data, encode_doc_data,
    Document, ListParams, ListMatcher
};
use ic_cdk::api::time;
use std::collections::HashMap;

#[assert_set_doc(collections = ["users", "votes", "tags"])]
fn assert_set_doc(context: AssertSetDocContext) -> Result<(), String> {
    match context.data.collection.as_str() {
        "users" => validate_user_document(&context),
        "votes" => validate_vote_document(&context),
        "tags" => validate_tag_document(&context),
        _ => Err(format!("Unknown collection: {}", context.data.collection))
    }
}

fn validate_user_document(context: &AssertSetDocContext) -> Result<(), String> {
    // Decode and validate the user data structure
    let user_data: UserData = decode_doc_data(&context.data.data.proposed.data)
        .map_err(|e| format!("Invalid user data format: {}", e))?;

    // Validate username format (3-20 chars, alphanumeric + limited symbols)
    if !is_valid_username(&user_data.username) {
        return Err("Username must be 3-20 characters and contain only letters, numbers, and underscores".to_string());
    }

    // Check username uniqueness by searching existing documents
    let search_pattern = format!("username={};", user_data.username.to_lowercase());
    let existing_users = list_docs(
        String::from("users"),
        ListParams {
            matcher: Some(ListMatcher {
                description: Some(search_pattern),
                ..Default::default()
            }),
            ..Default::default()
        },
    );

    // If this is an update operation, exclude the current document
    let is_update = context.data.data.before.is_some();

    for (doc_key, _) in existing_users.items {
        if is_update && doc_key == context.data.key {
            continue;
        }
        return Err(format!("Username '{}' is already taken", user_data.username));
    }

    Ok(())
}

fn validate_vote_document(context: &AssertSetDocContext) -> Result<(), String> {
    // Decode vote data
    let vote_data: VoteData = decode_doc_data(&context.data.data.proposed.data)
        .map_err(|e| format!("Invalid vote data format: {}", e))?;

    // Validate vote value constraints
    if vote_data.value < -1.0 || vote_data.value > 1.0 {
        return Err(format!("Vote value must be -1, 0, or 1 (got: {})", vote_data.value));
    }

    // Validate vote weight constraints
    if vote_data.weight < 0.0 || vote_data.weight > 1.0 {
        return Err(format!("Vote weight must be between 0.0 and 1.0 (got: {})", vote_data.weight));
    }

    // Validate tag exists
    let tag_params = ListParams {
        matcher: Some(ListMatcher {
            key: Some(vote_data.tag_key.clone()),
            ..Default::default()
        }),
        ..Default::default()
    };

    let existing_tags = list_docs(String::from("tags"), tag_params);
    if existing_tags.items.is_empty() {
        return Err(format!("Tag not found: {}", vote_data.tag_key));
    }

    // Prevent self-voting
    if vote_data.author_key == vote_data.target_key {
        return Err("Users cannot vote on themselves".to_string());
    }

    Ok(())
}

fn validate_tag_document(context: &AssertSetDocContext) -> Result<(), String> {
    // Decode tag data
    let tag_data: TagData = decode_doc_data(&context.data.data.proposed.data)
        .map_err(|e| format!("Invalid tag data format: {}", e))?;

    // Validate tag name format and uniqueness
    if !is_valid_tag_name(&tag_data.name) {
        return Err("Tag name must be 3-50 characters and contain only letters, numbers, and underscores".to_string());
    }

    // Check tag name uniqueness
    let search_pattern = format!("name={};", tag_data.name.to_lowercase());
    let existing_tags = list_docs(
        String::from("tags"),
        ListParams {
            matcher: Some(ListMatcher {
                description: Some(search_pattern),
                ..Default::default()
            }),
            ..Default::default()
        },
    );

    let is_update = context.data.data.before.is_some();

    for (doc_key, _) in existing_tags.items {
        if is_update && doc_key == context.data.key {
            continue;
        }
        return Err(format!("Tag name '{}' is already taken", tag_data.name));
    }

    // Validate description length
    if tag_data.description.len() > 1024 {
        return Err(format!(
            "Tag description cannot exceed 1024 characters (current length: {})",
            tag_data.description.len()
        ));
    }

    // Validate time periods
    validate_time_periods(&tag_data.time_periods)?;

    // Validate vote reward
    if tag_data.vote_reward < 0.0 || tag_data.vote_reward > 1.0 {
        return Err(format!(
            "Vote reward must be between 0.0 and 1.0 (got: {})",
            tag_data.vote_reward
        ));
    }

    Ok(())
}

fn validate_time_periods(periods: &[TimePeriod]) -> Result<(), String> {
    if periods.is_empty() {
        return Err("Tag must have at least 1 time period".to_string());
    }

    if periods.len() > 10 {
        return Err(format!(
            "Tag cannot have more than 10 time periods (got: {})",
            periods.len()
        ));
    }

    // Last period must be "infinity" (999 months)
    let last_period = periods.last().unwrap();
    if last_period.months != 999 {
        return Err(format!(
            "Last period must be 999 months (got: {})",
            last_period.months
        ));
    }

    // Validate each period's configuration
    for (i, period) in periods.iter().enumerate() {
        // Validate multiplier range (0.05 to 10.0)
        if period.multiplier < 0.05 || period.multiplier > 10.0 {
            return Err(format!(
                "Multiplier for period {} must be between 0.05 and 10.0 (got: {})",
                i + 1,
                period.multiplier
            ));
        }

        // Validate multiplier step increments (0.05)
        let multiplier_int = (period.multiplier * 100.0).round();
        let remainder = multiplier_int % 5.0;
        if remainder > 0.000001 {
            return Err(format!(
                "Multiplier for period {} must use 0.05 step increments (got: {})",
                i + 1,
                period.multiplier
            ));
        }

        // Validate month duration
        if period.months == 0 {
            return Err(format!(
                "Months for period {} must be greater than 0 (got: {})",
                i + 1,
                period.months
            ));
        }
    }

    Ok(())
}
```
Remember: Security is about preventing unauthorized or invalid operations, not just making them difficult. assert_set_doc hooks provide the only guaranteed way to validate all data operations in Juno's Datastore.
✍️ This blog post was contributed by Fairtale, creators of Solutio.
Solutio is a new kind of platform where users crowdfund the software they need, and developers earn by building it. Instead of waiting for maintainers or hiring devs alone, communities can come together to fund bug fixes, new features, or even entire tools — paying only when the result meets their expectations.
November’s been an exciting month, especially since I’ve officially started working full-time on Juno — thanks to the recently announced funding! This shift has already led to delivering some fantastic new features for developers, like automated backups (finally!!!), support for large WASM modules, the ability to buy cycles with Stripe, and a few other goodies.
These updates are all about making development smoother and more efficient, whether you’re building dapps, smart contracts, or managing your projects. Let’s dive into what’s new!
To kick things off, I’d like to highlight the introduction of backups—a feature I’ve been waiting for forever!
This addition brings a crucial layer of security for developers, letting you safeguard your modules and restore them whenever needed.
Here’s how it works: Currently, one backup per module is supported. You can manage backups manually via both the Console UI and the CLI, with options to create, restore, or delete them. Additionally, backups are automatically created during the upgrade process, taking a snapshot before transitioning to a new version. For those who prefer full control, advanced options let you skip creating a backup or avoid overwriting an existing one.
For anyone who, like me, feels a bit tense whenever it’s time to execute an upgrade, this feature is a huge relief. It’s really a great addition!
Getting cycles has become more straightforward, particularly for newcomers and non-crypto-native users, with the ability to buy cycles directly through Stripe, thanks to our friends at cycle.express.
With this integration, developers can simply make a payment, and the cycles are added directly to their module.
This was both a useful feature, as it makes it easy to transfer ICP from OISY to the developer's wallet on Juno, and an opportunity for me to try out the integration with various ICRC standards I implemented for the foundation.
I also used the opportunity to improve the UI/UX of the Receive feature by displaying wallet addresses with a QR code. This update wraps up a few related tasks, such as adding support for sending ICP to the outside world.
Support for larger WASM modules (over 2MB) has been added. While none of Juno's stock modules—such as Satellites, Mission Control, or Orbiter (Analytics)—come close to this size when gzipped, this limit could quickly be reached by developers using serverless functions.
By extending this limit, developers have more flexibility to embed additional third-party libraries and expand their module capabilities.
This support has been implemented across the CLI, the Console UI, and even local development environments using Docker, ensuring a consistent experience for all workflows.
Until recently, newly created Satellites lacked a default page for web hosting. This meant that developers opening their project right after creation would just see a blank page in the browser.
That’s why every new Satellite now comes with a sleek, informative default web page—delivering a great first impression right out of the box! ✨
Another handy tool introduced this month is support for pre- and post-deploy scripts in the CLI. With this feature, developers can now define a list of commands to be executed at specific stages of the deployment process.
The pre-deploy scripts are perfect for automating tasks like:
Compiling assets.
Running tests or linters.
Preparing production-ready files.
Likewise, post-deploy scripts come in handy for follow-up tasks, such as:
Sending notifications or alerts to administrators.
Cleaning up temporary files.
Logging deployment information for auditing.
```javascript
import { defineConfig } from "@junobuild/config";

/** @type {import('@junobuild/config').JunoConfig} */
export default defineConfig({
  satellite: {
    id: "ck4tp-aaaaa-aaaaa-abbbb-cai",
    source: "build",
    predeploy: ["npm run lint", "npm run build"],
    postdeploy: ["node hello.mjs"]
  }
});
```
Maybe not the most groundbreaking update, but the dark theme got even darker. 🧛♂️🦇 Perfect for those late-night coding sessions—or if you just enjoy the vibe!
Another area that saw improvement is the documentation. I aimed to make it more intuitive and useful for both newcomers and experienced developers. That’s why I revamped the guides section. Now, when you visit, you’ll be greeted with a simple question: “What are you looking to do? Build or Host?” 🎯. This approach should hopefully make onboarding smoother and more straightforward for developers.
The CLI documentation also received an upgrade. Updating it manually was a hassle, so I automated the process. Now, CLI help commands generate markdown files that are automatically embedded into the website every week. No more manual updates for me, and it’s always up to date for you! 😄
I also dedicated time to documenting all the configuration options in detail, ensuring every setting is clearly explained.
And as a finishing touch, I refreshed the landing page. 👨🎨
I hope these features get you as excited as they got me! I’m already looking forward to what’s next. Speak soon for more updates!
David
Stay connected with Juno by following us on X/Twitter.
As you may know, I recently proposed transforming Juno into a Decentralized Autonomous Organization through an SNS swap. Unfortunately, it didn’t reach its funding goal, so Juno didn’t become a DAO.
After the failure, three options came to mind: retrying the ICO with a lower target, continuing to hack as an indie project for a while, or simply killing it.
In the days that followed, I also received a few other options, including interest from venture capitalists for potential seed funding, which wasn’t an option for me.
Then, something unexpected happened:
The DFINITY foundation’s CTO, Jan Camenisch, reached out and proposed an alternative: funding the project through 2025.
I took a few days to consider the offer and ultimately accepted.
This support is a tremendous vote of confidence in Juno’s potential and importance within the ecosystem.
It’s worth emphasizing that the foundation’s support comes with no strings attached. They do not receive any stake in Juno, have no preferential treatment, and will not influence decisions. Should I ever consider another SNS DAO or any other funding route in the future, the foundation would have no special allocation or shares. This remains my project, and I am the sole decision-maker and controller.
This support also strengthens the relationship between Juno and the foundation, allowing us to stay in close contact to discuss the roadmap. It’s an arrangement that respects autonomy while fostering collaboration to advance the Internet Computer. As they say, it takes two to tango.
This funding opens up a world of possibilities and marks the first time I’ll work 100% on a project I created. I’m thrilled to continue building Juno as a resource that makes decentralized development accessible and impactful for everyone.
Obviously, while Juno remains under my sole ownership for now, I still believe that Juno should eventually become a DAO. Promoting full control for developers while retaining centralized ownership would be paradoxical. When the time is right, a DAO will ensure that Juno’s growth, security, and transparency are upheld through community-driven governance.
Thank you to everyone who believed in Juno through the SNS campaign and beyond 🙏💙. Your support has been invaluable, and this new phase wouldn’t be possible without you. Here’s to what lies ahead—a new chapter indeed.
To infinity and beyond,
David
Stay connected with Juno by following us on X/Twitter.
The SNS swap on the Internet Computer failed on Saturday, October 12, 2024 (ICP swap data). As a result, Juno did not become a Decentralized Autonomous Organization (DAO).
Hey everyone 👋
I hope you’re all doing well! I’m excited to share some big news with you today. Over the next few weeks, we’re taking some significant steps toward shaping the future of Juno, and I wanted to keep you in the loop about what’s coming.
As you may know, Juno is a blockchain-as-a-service ecosystem that empowers developers to build decentralized apps efficiently. One of its strengths is that it gives developers full and sole control over their work. For this reason, it would be paradoxical to continue operating the platform with a centralized model—i.e., with me being the sole controller of services, such as the administration console or our CDN. That’s why I’m thrilled to unveil that, in the upcoming weeks, I’m aiming to fix this bias by proposing that Juno becomes a Decentralized Autonomous Organization (DAO).
While this potential shift is on the horizon, there are a few key steps you can take to stay informed and involved in the process. Here’s how you can help shape the future of developing on Web3:
To ensure you don’t miss any crucial updates, I encourage you to sign up for our newsletter. The journey to proposing a DAO and making it a reality involves multiple steps, each requiring your participation. By signing up, you’ll receive timely notifications whenever there’s an opportunity to get involved and make a real impact.
The white paper has been updated to continue presenting the vision; however, the tokenomics aspect has been notably removed, as it is no longer relevant following the failure of the SNS DAO.
I’ve put together a white paper that outlines the reasoning and vision I have for a Juno Build DAO. I highly recommend giving it a read to fully understand what I’m aiming to achieve.
Questions are always welcome at any time, but if you’re looking to engage directly, I’ll be hosting a Juno Live session on 9 September at 3:00 PM CET. Join the livestream on YouTube to interact in real-time.
The proposal was approved and executed on September 26, 2024.
While I typically avoid relying on third parties for core features, transforming Juno into a DAO without leveraging such services would be an immense task. That’s why I’m proposing to use the Internet Computer’s built-in solution for creating and maintaining DAOs, known as SNS.
To kickstart the process of transforming our ecosystem, this framework requires submitting a proposal to the Internet Computer’s governance, known as NNS. This step ensures a decentralized and democratic process. It also prepares for the handover of control of the smart contracts and allows all participants to review the parameters involved.
Once this proposal is live, your voice will be crucial! You’ll have the opportunity to vote on whether to accept or reject it.
Please note that the following does not constitute financial advice.
If the proposal is approved, an initial decentralization swap will be kicked off. The goal here is to raise the initial funds for the DAO and to decentralize the voting power. Think of it like crowdfunding, where people contribute ICP tokens. In return, participants are rewarded with staked tokens, giving them a share of the DAO's voting power.
For the swap to succeed, it requires at least 100 participants and 200,000 ICP tokens. Otherwise, the entire journey of transforming Juno into a DAO fails. So, if you’re excited about being part of this adventure, this could be the step where you make a real difference — if you decide, of your own free will, to do so.
If the swap fails, it will mark the beginning of the end. While the platform won’t be deprecated immediately, I will gradually phase it out over the course of, let's say, a year. During this time, Juno will stop accepting new developers, and I will no longer develop new features, promote the eco-system, or invest in it beyond maintenance.
For those already using Juno, I want to reassure you that I won’t leave you stranded. I’m committed to offering support to help you transition and find suitable alternatives. I’m not, I hope, that much of an a-hole. I try to maintain good karma.
On a personal note, I would also be deprecating all of my personal projects, such as proposals.network, as I have no intention of using any developer tooling other than Juno for my own Web3 projects.
If the swap is successful, hooray! Juno will officially become a DAO, allowing you to actively participate in the governance of the project and start using the new JUNOBUILD token, among other exciting developments.
This will mark the beginning of an exciting new chapter, with the community at the heart of Juno's future.
To infinity and beyond,
David
Useful Links:
Juno White Paper - Understand the vision and details behind the proposed DAO.
Memecoins are starting to gain significant traction. Some of these tokens, such as Windowge98, Damonic Welleams, Wumbo, Spellkaster and $stik, have reached high prices and attracted many retail investors into the ecosystem. Now, you may be wondering how these meme tokens were launched. In this article, we will walk you through all the steps you need to follow in order to create your own memecoin project.
From creating the token smart contract (canister) to building a marketing website using Juno, and finally launching the token on ICPSwap, a major decentralized exchange (DEX) on ICP, we've got you covered.
We will also provide useful tips to ensure your memecoin project is successful. By the end of this article, you will have all the information needed to launch your token.
important
This article is for educational purposes only and is not financial advice of any form.
The Internet Computer (ICP) is a blockchain-based platform that aims to create a new type of internet, one that is decentralized, secure, and scalable. Developed, among others, by the DFINITY Foundation, the Internet Computer is designed to serve as a global public compute infrastructure, allowing developers to build and deploy decentralized applications (dApps) and services directly on the blockchain. Learn more about ICP
Juno is a blockchain-as-a-service (“blockchainless”) platform that empowers developers to build decentralized apps efficiently. Similar to Web2 cloud service platforms but with significant improvements, it offers a comprehensive toolkit to scaffold secure and efficient projects running on the blockchain.
In short, Juno is the Google Firebase alternative for Web3.
There are simpler ways to launch your own token that do not involve scripting, such as using no-code platforms like ICTO, ICPEx or ICPI.
However, since Juno is dedicated to providing developers with full ownership without compromise, this tutorial showcases an approach that aligns with our core values.
If you prefer to use one of those services, that's cool. Some of those also share these values; we just suggest you do your own research before making a decision.
And who knows, maybe in the future, Juno itself will make launching ledgers to the moon easy too! 😉
To deploy a ledger for your token, proceed as follows:
Make sure you have the dfx CLI installed on your machine. If not, follow this guide to complete the installation.
Creating a canister requires cycles, which measure and pay for resources like memory, storage, and compute power. Follow this guide to load cycles on your machine for deploying your ledger.
The following steps assume that you have cycles on your machine
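To double-check before deploying, one of the following commands shows your balance, depending on whether you hold cycles in a cycles wallet or on the cycles ledger (the latter requires a recent dfx release). Adapt this to your own setup.

```bash
# If your cycles live in a cycles wallet
dfx wallet balance --network ic

# If you use the cycles ledger (newer dfx versions)
dfx cycles balance --network ic
```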
On your computer, create an empty folder named myToken and open it in your favorite editor
Create a file inside the folder, name it dfx.json, and paste in the ledger canister definition (a sketch is shown below)
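The dfx.json declares the myToken canister that the deploy script below refers to. Its exact contents depend on the ledger release you pin; the sketch below only shows the general shape, with placeholder URLs standing in for the published ICRC-1 ledger Wasm and Candid files from the official ledger setup guide.

```json
{
  "canisters": {
    "myToken": {
      "type": "custom",
      "candid": "<URL-TO-THE-PUBLISHED-icrc1-ledger.did>",
      "wasm": "<URL-TO-THE-PUBLISHED-ic-icrc1-ledger.wasm.gz>"
    }
  }
}
```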
Next, we are going to define some parameters for our token and prepare the script for deployment.
Create a new file named deploy.sh and paste the following code:
```bash
#!/usr/bin/env bash

# Token settings
TOKEN_NAME="FROGIE"
TOKEN_SYMBOL="FRG"
TRANSFER_FEE=10000
PRE_MINTED_TOKENS=100_000_000_00_000_000
FEATURE_FLAGS=true
TRIGGER_THRESHOLD=2000
CYCLE_FOR_ARCHIVE_CREATION=10000000000000
NUM_OF_BLOCK_TO_ARCHIVE=1000

# Identities
dfx identity use default
DEFAULT=$(dfx identity get-principal)

dfx identity new archive_controller
dfx identity use archive_controller
ARCHIVE_CONTROLLER=$(dfx identity get-principal)

dfx identity new minter
dfx identity use minter
MINTER=$(dfx identity get-principal)

# Switch back to the identity that contains cycles
dfx identity use "<YOUR-IDENTITY>"

# Create and deploy the token canister
dfx canister create myToken --network ic

dfx deploy myToken --network ic --argument "(variant {Init = record {
  token_symbol = \"${TOKEN_SYMBOL}\";
  token_name = \"${TOKEN_NAME}\";
  minting_account = record { owner = principal \"${MINTER}\" };
  transfer_fee = ${TRANSFER_FEE};
  metadata = vec {};
  feature_flags = opt record { icrc2 = ${FEATURE_FLAGS} };
  initial_balances = vec {
    record {
      record { owner = principal \"${DEFAULT}\"; };
      ${PRE_MINTED_TOKENS};
    };
  };
  archive_options = record {
    num_blocks_to_archive = ${NUM_OF_BLOCK_TO_ARCHIVE};
    trigger_threshold = ${TRIGGER_THRESHOLD};
    controller_id = principal \"${ARCHIVE_CONTROLLER}\";
    cycles_for_archive_creation = opt ${CYCLE_FOR_ARCHIVE_CREATION};
  };
}})"
```
In this script, we define our token's name, symbol, transfer fee, and initial supply. Adjust these settings to match your tokenomics and token information details. For our token, we are premining 100 million tokens.
The script also specifies default settings for the token and sets up identities for minting and archiving.
note
Ensure you switch back to the identity that contains the cycles on your machine before running the commands below.
Once the file is saved, run the command below in your terminal to deploy the token canister on the network:
./deploy.sh
If all the previous steps are successful, you should get a link in this format https://a4gq6-oaaaa-aaaab-qaa4q-cai.raw.icp0.io/?id=<TOKEN-CANISTER-ID> where TOKEN-CANISTER-ID is the id of your token ledger that was deployed.
All the premined tokens are now held by the principal address of the default identity. You can transfer them to an external wallet like Plug to simplify distribution, since using the command line to distribute the tokens is a bit cumbersome (the example below shows a single transfer to a wallet principal).
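Moving part of the premined supply to your wallet's principal is a standard ICRC-1 transfer call against the ledger we just deployed. The recipient principal and amount below are placeholders, and the amount is expressed in the token's smallest unit.

```bash
# Run as the identity that holds the premined supply
dfx identity use default

# ICRC-1 transfer (placeholder recipient and amount)
dfx canister call --network ic myToken icrc1_transfer '(record {
  from_subaccount = null;
  to = record { owner = principal "<RECIPIENT-PRINCIPAL>"; subaccount = null };
  amount = 100_000_000;
  fee = null;
  memo = null;
  created_at_time = null
})'
```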
Learn more about creating token canisters
The next step is to set up a marketing website for your project.
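The steps below assume the project is scaffolded with Juno's standard starter; run it first, then answer the wizard's prompts as described next.

```bash
# Scaffold the marketing website with Juno's project starter
npm create juno@latest
```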
Select no when asked to configure the local development emulator
Select yes to install the dependencies
Select yes to install Juno's CLI tool. The Juno CLI will help us deploy our project to the satellite.
Navigate to the project folder myWebsite and open it in your favorite code editor. If every previous step was successful, running npm run dev in the terminal will open the project in the browser, and you should see something similar to this.
In the above code, we created a simple website that displays the logo of our token, as well as its name, symbol, and total supply. There is also a button that allows the user to buy our token from an exchange where it is listed.
Edit the code above to display the information of your token including the name, symbol, total supply, and logo.
Now that we have connected our project to the satellite, we can compile and deploy the website.
npm run build
The above command compiles our website and outputs the compiled files in the dist folder
juno deploy
This will deploy our compiled files to the satellite that we linked our website to.
At this stage, if all the previous steps are successful, the command will output a link which is in this format https://<SATELLITE_ID>.icp0.io where SATELLITE_ID is the id of the satellite that we connected our project to.
tip
Running juno open in your terminal opens your project in your favorite browser.
Opening the link in the browser, you should have something like this below
In this section, we will look at how to list our newly created token on ICPSwap.
ICPSwap is a decentralized exchange that facilitates token trading and swapping by allowing tokens to be listed and liquidity pools to be created for different token pairs.
And because ICPSwap is a decentralized autonomous organization (DAO) controlled by the community members, you need to submit a proposal for your token to be added on the exchange. This proposal will be voted on by the community members. If the proposal passes, the token will be listed on this exchange.
We will create a proposal to add our token on ICPSwap in the following steps.
Click on the three dots in the right corner and select make proposal
Select MOTION as the proposal type
Add a descriptive title, something like "ADD FROGIE TO THE TOKEN LIST"
In the summary section, add all the details about your token, for example the token canister address, social media handles, and any other information you feel will help voters understand your token better
Once you have filled in all the fields, click submit and the proposal will be submitted.
NOTE: You will be charged a fee of 50 ICS for this service, so ensure you have enough ICS balance before you perform this step.
The voting duration for proposals on the ICPSwap platform is typically three days. If a proposal passes during this voting period, your token will be listed on the exchange and will be tradable.
Once your token is available for trading, you can update the link on the Buy Frogie Now button to redirect the user to the exchange from where they can buy the token.
note
You can also use proposals.network as an alternative to submit a proposal to any SNS project.
If you have reached this step without any errors, congratulations, you have created your first meme coin project. 🥳
Now you can start marketing to attract more users and holders. Good luck! 🤞
The first step to creating a successful meme coin is finding a unique topic that resonates with people. Your concept should be relatable, funny or nostalgic. Capture the essence of internet culture with a catchy name and logo that embodies the humor and appeal of your chosen meme.
Most successful meme coin projects hire specialized crypto influencer marketing teams with extensive networks. Partner with online personalities who like memes or crypto and have them talk about your coin to their followers.
In this article, we have covered everything you need to launch a successful memecoin project, from creating the token canister, to creating a marketing website using Juno and listing the token on ICPSwap.
This article is for educational purposes only and is not financial advice of any form. Do Your Own Research (DYOR) if you want to invest in memecoins.
👋
Stay connected with Juno by following us on Twitter to keep up with our latest updates.
And if you made it this far, we’d love to have you join the Juno community on Discord. 😉
Renaming a route in your web application is a common task, but it’s crucial to handle it correctly to avoid breaking links and negatively impacting your SEO. Redirecting the old route to the new one ensures a seamless user experience and maintains your site's search engine rankings.
In this blog post, we’ll guide you through the steps to set up a redirection after renaming one of your pages.
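As a sketch of what the end result can look like: Juno lets you declare redirects in your Juno configuration file alongside your Satellite settings. The nesting and field names used below (a storage.redirects array with source, location, and code) are assumptions to illustrate the idea; check the configuration reference for the exact schema.

```typescript
import { defineConfig } from "@junobuild/config";

// Sketch: permanently redirect the renamed route to its new location.
// Field names and nesting are assumed; verify against Juno's configuration docs.
export default defineConfig({
  satellite: {
    id: "<SATELLITE_ID>",
    source: "dist",
    storage: {
      redirects: [
        {
          source: "/old-page",
          location: "/new-page",
          code: 301
        }
      ]
    }
  }
});
```

Once the configuration is applied and the site redeployed, requests to the old path answer with a 301 pointing to the new one, which is the signal search engines need to transfer ranking to the new URL.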
Beyond cryptocurrencies, blockchain technology offers tools to build secure, transparent applications fully controlled by the user. Building a blog website on the blockchain allows the user to establish a censorship-resistant space where they retain full ownership of their content and data.
In this article, we will look at how to create and host your blog website on the blockchain using Juno. Juno is an open-source Blockchain-as-a-service platform that offers a fully decentralized and secure infrastructure for your applications. This article will cover setting up a boilerplate project, configuring the hosting, developing the code for your blog and deploying the project on the blockchain using some of Juno's super powers.
By the end of this article, you will have an understanding of how Juno works, how to host your websites on the blockchain and how to automate the different tasks using Github Actions.
Juno works just like traditional serverless platforms such as Google Firebase or AWS Amplify, but with one key difference: everything on Juno runs on the blockchain. This means that you get a fully decentralized and secure infrastructure for your applications, which is pretty cool if you ask me.
Behind the scenes, Juno uses the Internet Computer blockchain network and infrastructure to launch what we call a “Satellite” for each project you build. A Satellite is essentially a smart contract on steroids that contains your entire app. From its assets provided on the web (such as JavaScript, HTML, and image files) to its state saved in a super simple database, file storage, and authentication, each Satellite controlled solely by you contains everything it needs to run smoothly.
The Internet Computer (ICP) is a blockchain-based platform that aims to create a new type of internet, one that is decentralized, secure, and scalable. Developed, among others, by the DFINITY Foundation, the Internet Computer is designed to serve as a global public compute infrastructure, allowing developers to build and deploy decentralized applications (dApps) and services directly on the blockchain.
Unlike traditional blockchains, the Internet Computer uses a unique consensus mechanism called Threshold Relay, which allows it to achieve high transaction throughput and low latency. The platform is also designed to be highly scalable, with the ability to add more nodes and increase its computing power as demand grows. This makes the Internet Computer a promising platform for building a wide range of decentralized applications, from social media and e-commerce to finance and cloud computing. Learn more about ICP
This is a secure and decentralized blog website. The frontend is built with Astro, a modern, flexible web framework focused on building fast, content-rich websites with minimal JavaScript. Here is what you will build by the end of this article:
Select no when asked to configure the local development emulator
Select yes to install the dependencies
Select yes to install Juno's CLI tool. The Juno CLI will help us deploy our project to the satellite.
Navigate to the project folder myBlog and open it in your favorite code editor.
If every previous step was successful, running npm run dev will open the project in your browser, and you should see something similar to this.
The above code displays a navbar with three tabs: Home, Articles, and About. It also displays information about the different articles from our sample article data.
In the components folder, create a new file and name it blogPosts.json. Paste the code below
```json
[
  {
    "title": "Introduction to Astro",
    "image": "https://juno.build/img/cloud.svg",
    "description": "Astro is a new static site generator that makes it easy to build fast, content-focused websites.",
    "url": "https://docs.astro.build/en/getting-started/"
  },
  {
    "title": "Tailwind CSS: A Utility-First CSS Framework",
    "image": "https://juno.build/img/launch.png",
    "description": "Tailwind CSS is a utility-first CSS framework that makes it easy to build responsive and customizable user interfaces.",
    "url": "https://tailwindcss.com/docs/installation"
  },
  {
    "title": "The Benefits of Static Site Generation",
    "image": "https://juno.build/img/moon.svg",
    "description": "Static site generation offers several benefits, including improved performance, better security, and easier deployment.",
    "url": "https://www.netlify.com/blog/2016/05/02/top-ten-reasons-the-static-website-is-back/"
  },
  {
    "title": "Introduction to Astro",
    "image": "https://juno.build/img/illustration.svg",
    "description": "Astro is a new static site generator that makes it easy to build fast, content-focused websites.",
    "url": "https://docs.astro.build/en/getting-started/"
  }
]
```
This file holds our sample article data that we are using for this project.
If all the above steps were successful, your project should look like this in the browser
To keep the satellite operational, the developer pays a small fee that is used to purchase the necessary cycles for the satellite's storage and computation requirements. Learn more about pricing
We need to link our project to the satellite. Follow the steps below:
In the project terminal, run the command juno init and follow the prompts
Select yes to login and authorize the terminal to access your satellite in your browser
Select myBlogSatellite as the satellite to connect the project to
Select dist as the location of the compiled app files
Select TypeScript as the configuration file format.
If the above step is successful, a new file, juno.config.ts, will be added at the root of our project folder. It contains the configuration our project needs to connect to the satellite, and your project must include it to be deployed successfully. Learn more about this configuration
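For reference, a minimal configuration for this setup looks roughly like the sketch below; the Satellite ID is a placeholder, and source matches the dist folder selected above.

```typescript
import { defineConfig } from "@junobuild/config";

// juno.config.ts: placeholder Satellite ID; "dist" matches the folder chosen during juno init
export default defineConfig({
  satellite: {
    id: "<SATELLITE_ID>",
    source: "dist"
  }
});
```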
Now that we have connected our project to the satellite, we can compile the project and deploy it to the satellite.
npm run build
The above command compiles our project and outputs the compiled files in the dist folder
juno deploy
This will deploy our compiled files to the satellite that we linked our project to.
At this stage, if all the previous steps are successful, the juno deploy command will output a link in this format: https://<SATELLITE_ID>.icp0.io, where SATELLITE_ID is the ID of the satellite that we connected our project to.
tip
Running juno open in your terminal opens your project in your favorite browser.
Opening the link in the browser, you should have something like this below
If you have reached this step, well done, you have successfully deployed your first blog website on the blockchain using Juno.
As you may have noticed in the previous steps, every time we make changes to our project, we have to manually run the commands that compile and deploy our code to the satellite. In this section, we will learn how to automate these tasks using GitHub Actions, so that whenever we make changes to our project, they are automatically deployed to our satellite.
In our project, we have a folder .github which contains the file deploy.yml. This file has all the configuration required to set up GitHub Actions in our project. The folder must be present in your project for GitHub Actions to work; you can add it manually if you don't have it. Below are the contents of the deploy.yml file
```yaml
name: Deploy to Juno

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
      - name: Install Dependencies
        run: npm ci
      - name: Build
        run: npm run build
      - name: Deploy to Juno
        uses: junobuild/juno-action@main
        with:
          args: deploy
        env:
          JUNO_TOKEN: ${{ secrets.JUNO_TOKEN }}
```
To set up GitHub Actions, we need a secret token that uniquely identifies our satellite. GitHub needs this token to associate our repo with the satellite.
Visit the Juno Console and select the myBlogSatellite satellite.
Under the controllers tab, click add controller
Select 'Generate new controller' and select 'Read-write' as the scope.
Click submit.
Once the new controller is generated, copy and store the secret token it provides, then add it to your GitHub repository as a secret named JUNO_TOKEN (the name referenced in the workflow above).
To upload our code to our remote GitHub repository, we must establish a connection between our local project and the repository
Run the command below in your project terminal
```bash
git init
git remote add origin https://github.com/sam-thetutor/myfirstBlog.git
git add .
git commit -m "my first commit"
git push -u origin main
```
The commands above establish the connection to our remote GitHub repo and push all our project code to it. Now, every time we make changes to our project, all we have to do is push them to our GitHub repo and they will automatically be deployed to our satellite. Learn more about setting up GitHub Actions with Juno
Now that we have successfully hosted our blog website on the blockchain, you can go ahead and add more articles to showcase your skills (an example entry is shown below). You can also add more features to make the website more robust.
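For instance, a new article is just another entry appended to blogPosts.json, using the same fields as the existing ones; the values here are placeholders.

```json
{
  "title": "My First On-Chain Post",
  "image": "/images/my-post.png",
  "description": "A short summary of the article shown on the listing page.",
  "url": "/articles/my-first-on-chain-post"
}
```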
In this article, we have looked at how to create a boilerplate project using Juno, create a satellite from the Juno Console, write the code for our project, connect the satellite to our local project, deploy the project to the satellite, and configure GitHub Actions to automate the build and deployment tasks.
👋
Stay connected with Juno by following us on Twitter to keep up with our latest updates.
And if you made it this far, we’d love to have you join the Juno community on Discord. 😉
Are you looking to extend Juno's features? Stop right there, because it is now possible!
I'm thrilled to unveil today's new addition to the set of features offered by Juno: the introduction of serverless Functions enabling developers to extend the native capabilities of Satellites. This groundbreaking update opens a plethora of opportunities for developers to innovate and customize their applications like never before.
In the realm of cloud computing, serverless architecture allows developers to build and run applications and services without the burden of managing infrastructure. This model enables the execution of server-side code based on user demand, allowing for direct interactions with APIs, databases, and other resources as part of your project's deployment. It's a paradigm that significantly reduces overhead and increases the agility of software development processes.
The introduction of serverless blockchain functions by Juno innovatively takes this concept a step further by integrating blockchain technology into this flexible and efficient framework. This groundbreaking development opens the door for extending the native capabilities of Satellites smart contracts, pushing the boundaries of what's possible within the blockchain space.
This means you can now enhance the functionality of Satellites smart contracts and extend those capabilities with anything that can be achieved on the Internet Computer blockchain.
At the core of Juno's serverless blockchain functions are hooks, which are essentially the backbone of how these functions operate within the ecosystem. These hooks are defined to automatically respond to events within your Satellite, including operations such as creating, updating, and deleting documents and assets.
An essential feature of these optional hooks is their ability to spawn asynchronously, a design choice that significantly enhances the efficiency and responsiveness of applications built on Juno. This asynchronous spawning means that the hooks do not block or delay the execution of calls and responses between your client-side decentralized application (dApp) and the smart contract.
A picture is worth a thousand words, so here is a simplified schematic representation of a hook that is triggered when a document is set:
In addition to unveiling this new feature, we're also excited to introduce a brand-new developer experience we hope you're going to enjoy. This is built on the local development environment we released earlier this year, designed to make your work with Juno smoother and more intuitive.
Note: Make sure you have Juno's CLI tool installed on your machine.
Start by ejecting the Satellite within your project. This step prepares your project for local development. Open your terminal and run:
```bash
juno dev eject
```
In a new terminal window, kick off the local development environment that leverages Docker:
```bash
juno dev start
```
Now, your local development environment is up and running, ready for you to start coding.
Once you're ready to see your changes in action, compile your code:
```bash
juno dev build
```
One of the key benefits of Juno's local development environment is its support for hot reloading. This feature automatically detects changes to your code and deploys them in the local environment. It means you can immediately test your custom code locally, ensuring a fast and efficient development cycle.
This sample application illustrates the use of Juno's serverless functions to perform asynchronous data operations with a small frontend client and backend hook setup.
The frontend client is designed to save a document in the Datastore, while the backend hook modifies this document upon being triggered. This process exemplifies the asynchronous capability of functions to read from and write to the Datastore.
To begin exploring this functionality, clone the example repository and prepare the environment with the following commands:
```bash
git clone https://github.com/junobuild/examples
cd rust/hooks
npm ci
```
After setting up the project, follow the steps outlined in the previous chapter, Getting Started, to start and debug the sample in your local environment.
The core of this sample is the hook code, which is triggered upon the document set operation in a specific collection. Here’s the hook's logic:
```rust
#[on_set_doc(collections = ["demo"])]
async fn on_set_doc(context: OnSetDocContext) -> Result<(), String> {
    // Decode the new data saved in the Datastore
    let mut data: Person = decode_doc_data(&context.data.data.after.data)?;

    // Modify the document's data
    data.hello = format!("{} checked", data.hello);
    data.yolo = false;

    // Encode the data back into a blob
    let encode_data = encode_doc_data(&data)?;

    // Prepare parameters to save the updated document
    let doc: SetDoc = SetDoc {
        data: encode_data,
        description: context.data.data.after.description,
        updated_at: Some(context.data.data.after.updated_at),
    };

    // Save the updated document
    set_doc_store(
        context.caller,
        context.data.collection,
        context.data.key,
        doc,
    )?;

    Ok(())
}
```
This hook demonstrates asynchronous processing by reading the initial data saved from the frontend, modifying it, and then saving the updated version back to the Datastore. It's triggered specifically for documents within the "demo" collection and showcases how to handle data blobs, execute modifications, and interact with the Datastore programmatically.
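The Person type itself is not shown above; it only needs to mirror the fields the hook reads and writes. A minimal sketch of what it could look like, based on those fields, is:

```rust
use serde::{Deserialize, Serialize};

// A sketch of the document data handled by the hook: it exposes the
// `hello` and `yolo` fields that the hook modifies. The actual struct
// lives in the example repository.
#[derive(Serialize, Deserialize)]
struct Person {
    hello: String,
    yolo: bool,
}
```

The decode_doc_data and encode_doc_data helpers used in the hook take care of converting between such structs and the blobs the Datastore persists.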
As mentioned in the introduction, the serverless functions extend Juno's capabilities to anything that can be achieved on the Internet Computer. With this in mind, let's explore implementing HTTPS outcalls to a Web2 API in another sample.
To explore this advanced functionality, follow the steps below to clone the repository and set up the project:
```bash
git clone https://github.com/junobuild/examples
cd rust/https-outcalls
npm ci
```
After cloning and navigating to the correct directory, proceed with starting and debugging the sample in your local environment, as outlined in the Getting Started chapter.
The hook implemented in this sample interacts with the Dog CEO API to fetch random dog images and update documents within the dogs collection in the Datastore. Here's how it works:
```rust
// The data of the document we are looking to update in the Satellite's Datastore.
#[derive(Serialize, Deserialize)]
struct DogData {
    src: Option<String>,
}

// We are using the Dog CEO API in this example.
// https://dog.ceo/dog-api/
//
// Its endpoint "random" returns such JSON data:
// {
//   "message": "https://images.dog.ceo/breeds/mountain-swiss/n02107574_1118.jpg",
//   "status": "success"
// }
//
// That's why we declare a struct that matches the structure of the answer.
#[derive(Serialize, Deserialize)]
struct DogApiResponse {
    message: String,
    status: String,
}

#[on_set_doc(collections = ["dogs"])]
async fn on_set_doc(context: OnSetDocContext) -> Result<(), String> {
    // 1. Prepare the HTTP GET request
    let url = "https://dog.ceo/api/breeds/image/random".to_string();

    let request_headers = vec![];

    let request = CanisterHttpRequestArgument {
        url,
        method: HttpMethod::GET,
        body: None,
        max_response_bytes: None,
        // In this simple example, we skip sanitizing the response with a custom function for simplicity.
        transform: None,
        // We do not require any particular HTTP headers in this example.
        headers: request_headers,
    };

    // 2. Execute the HTTP request. A request consumes Cycles(!). In this example we provide 2_000_000_000 Cycles (= 0.002 TCycles).
    // To estimate the costs see documentation:
    // - https://internetcomputer.org/docs/current/developer-docs/gas-cost#special-features
    // - https://internetcomputer.org/docs/current/developer-docs/integrations/https-outcalls/https-outcalls-how-it-works#pricing
    // The total amount of cycles depends on the subnet size. Therefore, on mainnet it might cost ~13x more than what's required when developing locally.
    // Source: https://forum.dfinity.org/t/http-outcalls-cycles/27439/4
    // Note: In the future we will have a UI logging panel in console.juno.build to help debug in production. Follow https://github.com/junobuild/juno/issues/415.
    //
    // We rename ic_cdk::api::management_canister::http_request::http_request to http_request_outcall because the Satellite already includes a function with that name.
    match http_request_outcall(request, 2_000_000_000).await {
        Ok((response,)) => {
            // 3. Use serde_json to transform the response to a structured object.
            let str_body = String::from_utf8(response.body)
                .expect("Transformed response is not UTF-8 encoded.");

            let dog_response: DogApiResponse =
                serde_json::from_str(&str_body).map_err(|e| e.to_string())?;

            // 4. Our goal is to update the document in the Datastore with an update that contains the link to the image fetched from the API we just called.
            let dog: DogData = DogData {
                src: Some(dog_response.message),
            };

            // 5. We encode those data back to blob because the Datastore holds data as blob.
            let encode_data = encode_doc_data(&dog)?;

            // 6. Then we construct the parameters required to call the function that saves the data in the Datastore.
            let doc: SetDoc = SetDoc {
                data: encode_data,
                description: context.data.data.after.description,
                updated_at: Some(context.data.data.after.updated_at),
            };

            // 7. We store the data in the Datastore for the same caller as the one who triggered the original on_set_doc, in the same collection and with the same key as well.
            set_doc_store(
                context.caller,
                context.data.collection,
                context.data.key,
                doc,
            )?;

            Ok(())
        }
        Err((r, m)) => {
            let message =
                format!("The http_request resulted into error. RejectionCode: {r:?}, Error: {m}");

            Err(message)
        }
    }
}
```
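For completeness, the sample above relies on imports roughly along these lines, with the rename mentioned in the comments happening at import time. This is a sketch; the exact list (and the ic-cdk paths, which vary by version) lives in the example repository:

```rust
use ic_cdk::api::management_canister::http_request::{
    http_request as http_request_outcall, CanisterHttpRequestArgument, HttpMethod,
};
use junobuild_macros::on_set_doc;
use junobuild_satellite::{set_doc_store, OnSetDocContext, SetDoc};
use junobuild_utils::encode_doc_data;
use serde::{Deserialize, Serialize};
```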
This sample not only provides a practical demonstration of making HTTP outcalls but also illustrates the enhanced capabilities that serverless functions offer to developers using Juno.
In conclusion, Juno's serverless functions mark a significant advancement in blockchain development, offering developers the tools to create more sophisticated and dynamic applications. This feature set not only broadens the scope of what can be achieved within Juno's ecosystem but also underscores the platform's commitment to innovation and developer empowerment. As we move forward, the potential for serverless technology in blockchain applications is boundless, promising exciting new possibilities for the future.
👋
Stay connected with Juno by following us on X/Twitter.