It’s been a while since my last #buildinpublic article collection, so it’s time for a new adventure! It all starts with a simple question: What if you could build a fully functional, cross-platform app, end-to-end, with an AI writing most of the code?
In this article series I will try to do exactly that and you’re invited along for the experiment. I will document the process of creating a Water Reminder App with AI as my primary development, product and UX partner.
Disclaimer: we will focus more on the strategy and human-AI interaction, not on the generated code itself.
My motivation for this experiment is a blend of a personal need, curiosity about the real potential of AI, and a desire to get back to writing.
I need consistency with my water intake: I have a chronic condition that has some side effects and risks that get aggravated if I don’t hydrate properly. Despite that, I’m quite terrible at it. There are plenty of reminder apps out there, but I thought I’d combine the fun of building my own with the goal of improving my hydration habits.
Exploring AI capabilities: the recent rapid advancements in LLMs are crazy. I already leverage them massively for non-code-related things, both at work and in my personal life (e.g. my latest training plan was created by ChatGPT). But I want to test how well they perform at something more advanced. Can an LLM act as a full-stack developer? Can it understand context, make architectural decisions, and translate high-level product requirements into functional code?
The “Prompt-First” Engineer: Despite my engineering background, I’m intentionally deciding not to write any code myself. I want to see how much can be achieved through prompt engineering alone while putting the LLM to a real test. My focus will be on guiding the LLM’s output and ensuring the app’s functionality and quality, rather than on writing the code.
Sharing the Process: I’ve been feeling the motivation to start writing again and needed something to write about. I had good fun with my previous #buildinpublic series, and this idea felt like a nice opportunity to push that desire forward. And if even one person can learn from this experience or be inspired to explore something themselves, that’s already a win.
As mentioned above, for very personal reasons, the chosen project is a Water Reminder App. It’s a common concept but should offer enough complexity to be a good test case for this experiment. The app’s core functionality will be about tracking water intake, setting drinking reminders, and providing insights into hydration patterns. The end goal is a deployable, functional product that works seamlessly across different platforms (starting with Android and iOS, with potential for Web later).
This won’t be a one-shot build. I will take an iterative approach, and the series will reflect that. We’ll define features one at a time, build them, test, refine, and repeat.
Tech stack and tooling, every engineer’s favourite topic! I could probably have a dedicated collection of articles just talking about this, but I will spare you the very long reads and will go straight to the point.
My choice for a comprehensive AI partner in this project is Gemini Pro. The decision came down to a few key factors:
Cost-Effectiveness: This is the main factor. As a Google One subscriber, I already have Gemini Pro as part of my existing ecosystem, making it a highly accessible and cost-efficient choice for this personal project compared to other premium LLM offerings. Plus, my family also benefits from it through the same subscription.
Ecosystem Synergy: Being a Google model, Gemini Pro potentially has a deeper, more intuitive understanding of Flutter and Dart, which are also Google products and central to my chosen development framework. We’ll find out.
Beyond the IDE: While LLMs like Gemini can be integrated directly into IDEs as coding assistants, my primary interaction with Gemini Pro for this project, as mentioned before, goes far beyond a simple code-completion tool. My goal is for Gemini Pro to act as a holistic development partner, retaining and leveraging the entire product and project knowledge.
In this partnership, Gemini Pro is much more than just a code generator. It’s filling a multitude of roles, from ideation to technical architecture. It’s responsible for transforming my product visions into concrete technical solutions, a task that demands far more than just writing code.
My fellow Native Engineers will like me a little less for this one 🙈 but I’ve opted for using Flutter for this adventure.
Cross-Platform Power: Flutter’s ability to compile a single codebase into native applications for Android, iOS, and even Web (assuming it delivers as promised!) offers incredible reach. With my Android background, this also gives me a chance to ship on iOS, which I’m curious about beyond just a manager’s perspective.
Leveraging My Background: I want to stay within the mobile scope, and Flutter is perfect for that. While Gemini Pro will be writing the code, my extensive background in mobile development and management in this space provides crucial context to guide the LLM. I can direct it towards specific UX patterns, ensure native features are appropriately leveraged, and evaluate its output with some experience.
When it came to choosing the IDE to manage all of the project’s generated code, I went with VS Code. It’s more lightweight than Android Studio or IntelliJ, and it has official support for everything Flutter and Dart, covering Android, iOS, and Web delivery, which is a crucial requirement. Plus, the smooth experience of deploying to Android and iOS emulators makes it great for easy testing and validation.
Finally, as I’ve mentioned, my role in this unique setup isn’t that of a traditional coder. I will be operating more as the Product Owner, the Prompt Engineer, and probably more. Of course, I’ll still wear my ‘engineer hat’ frequently, diving in to direct the LLM on lower-level technical aspects when needed. My responsibilities include:
Defining the Vision: Articulating the “what” and “why” of the product I want to build.
Guiding with Prompts: Providing the LLM with detailed, and context-rich prompts to direct all its development efforts.
Reviewing and Iterating: Evaluating all AI-generated output, including code, design, and more, and providing feedback and direction for iterative refinements.
Ensuring Quality: Making sure the app adheres to good UX principles, leverages appropriate native features, and ultimately delivers a valuable user experience.
Now you know what to expect from this series. Ready to join the journey and find out if building an end-to-end product with an LLM is possible? Stick around and stay tuned for the next article!