Agenda
We will tell the story of the emotional rollercoaster of Leanplum’s initial product-market fit discovery, its rediscovery after a couple of pivots, and the successful acquisition by CleverTap, which had its own counter-intuitive journey to product-market fit.
We’re going to share why we decided to use Web Components instead of other popular component-based libraries like React to build our multibrand, multisite front-end architecture, how we built it, and what we learned.
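For readers unfamiliar with the approach, here is a minimal sketch of a framework-agnostic custom element, the kind of building block a multibrand Web Components architecture relies on. This is not code from the talk; the element name and attribute are illustrative assumptions.

```typescript
// Minimal sketch (not from the talk): a standard custom element with a
// shadow root, so styles stay scoped per brand/site. Name and attribute
// are illustrative assumptions.
class BrandButton extends HTMLElement {
  static get observedAttributes(): string[] {
    return ["label"];
  }

  private root: ShadowRoot;

  constructor() {
    super();
    this.root = this.attachShadow({ mode: "open" });
  }

  connectedCallback(): void {
    this.render();
  }

  attributeChangedCallback(): void {
    this.render();
  }

  private render(): void {
    const label = this.getAttribute("label") ?? "Click me";
    this.root.innerHTML = `
      <style>
        button { padding: 0.5rem 1rem; font: inherit; }
      </style>
      <button>${label}</button>
    `;
  }
}

// Register once; any site or brand can then use <brand-button label="Buy">.
customElements.define("brand-button", BrandButton);
```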
When ChatGPT disrupted our industry, after the initial shock, I decided to give it a try. I am a technical person, but I hadn’t done hands-on software development in a long time, and the technologies have changed since then. To create a modern microservices app, one needs a lot of knowledge about different stacks, languages, and implementations.
When I saw the true power of ChatGPT4, I decided to run an experiment. My TypeScript and JavaScript experience is definitely not production-grade, but I wanted to see how far ChatGPT4 would take me on my journey to develop a standard CRUD backend service.
I started with a standard description of what needed to be done: a backend service providing standard CRUD operations (and maybe a bit more, we’ll see!) for two data sets: Users and Games. I wanted the service in TypeScript, implemented with Express and MySQL, and run by Node.js. The whole service had to be packaged in a container and run in my local Docker setup (for now).
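To ground what “a standard CRUD backend service” means here, the following is a minimal sketch rather than the code produced during the experiment; it assumes the `express` and `mysql2` packages and a simple `users` table.

```typescript
// Minimal illustrative sketch (assumptions: `express`, `mysql2`, and a
// MySQL `users` table). Not the code from the talk.
import express, { Request, Response } from "express";
import mysql, { ResultSetHeader } from "mysql2/promise";

const pool = mysql.createPool({
  host: process.env.DB_HOST ?? "localhost",
  user: process.env.DB_USER ?? "root",
  password: process.env.DB_PASSWORD ?? "",
  database: process.env.DB_NAME ?? "games_app",
});

const app = express();
app.use(express.json());

// Read all users.
app.get("/users", async (_req: Request, res: Response) => {
  const [rows] = await pool.query("SELECT id, name FROM users");
  res.json(rows);
});

// Create a user.
app.post("/users", async (req: Request, res: Response) => {
  const { name } = req.body;
  const [result] = await pool.execute(
    "INSERT INTO users (name) VALUES (?)",
    [name]
  );
  res.status(201).json({ id: (result as ResultSetHeader).insertId, name });
});

app.listen(3000, () => console.log("CRUD service listening on :3000"));
```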
I hadn’t had the chance to write any example (or production, for that matter) code. I have taken a few Docker courses, but I’m very far from even a junior Docker specialist (though I can read and roughly understand a Dockerfile). I wanted to see if ChatGPT4 would enable me to build all this with my current (junior-level) technological knowledge and whether it would walk me safely through this implementation journey.
This presentation demonstrates my findings along this road. We will walk it together, demonstrating and hopefully convincing everyone that ChatGPT4 is not here to turn us back into hunter-gatherers but to enable an even higher level of productivity. Or does it?
Have you ever had to manage multiple repositories for your interdependent npm packages? If so, I bet you’ve struggled with it at some point, and more than once for that matter. You might think that migrating to a monorepo is the obvious solution, and most of the time you’d be right. But what is the cost? In this session, we’ll talk about the pros and cons of each approach and how to navigate the challenges that come with them. Get an exclusive behind-the-scenes peek as I unveil my personal journey in developing one of the most extensive web components libraries!
We often find ourselves in complex situations like: “I’m not sure if I should leave my job because I’m unhappy with the salary, but I really like the team and the technologies I work with.” A common problem in the IT industry is high turnover and the mindset that there are plenty of job opportunities, so we should change jobs frequently. However, this vicious cycle has many disadvantages, such as stress and wasted time. Fortunately, the solution is not limited to just two options, and we can greatly simplify such situations by considering our priorities.
Impostor syndrome is a condition most relevant to the workplace, with extremely high rates in the IT industry. It is the feeling of being a fraud who is about to be exposed any minute now. It’s about not feeling like you deserve the position, salary, or praise you get at work. All of this, however, can be turned into a positive if you have the right tools.
Drones and multispectral cameras are becoming more affordable and their use in various fields will proliferate. We recently got the chance to work with a DJI P4 multispectral drone in the Precision Agriculture domain. In this session you will be introduced to the topic and will learn from our experiences. We will also outline some problems and possible future directions for research and applications.
Business guys often like to use terms and jargon that IT people don’t understand. Feels a bit intimidating sometimes. Especially when acronyms get into play. MSA, M&A, NDA, LOI, ROI, MOU, SOW, TTM, T&M, P&L, WIP, WBS… WT#!?
Let’s play a bit with terms like these – we’ll have some fun and you might also learn a thing or two.
In this session, I will show you:
– Why we decided to use Playwright over the existing automation written in Cypress.
– What advantages Playwright has over Cypress and Selenium.
– What we managed to achieve in just a few months (a minimal Playwright test is sketched below).
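As a rough illustration of what a Playwright test looks like, here is a minimal TypeScript test using `@playwright/test`. It is not code from the project; the URL and selectors are hypothetical.

```typescript
// Minimal illustrative Playwright test (hypothetical URL and selectors).
import { test, expect } from "@playwright/test";

test("user can sign in", async ({ page }) => {
  await page.goto("https://example.com/login");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("secret");
  await page.getByRole("button", { name: "Sign in" }).click();
  // Playwright's web-first assertion retries until the condition holds.
  await expect(page.getByText("Welcome")).toBeVisible();
});
```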
During his talk, Taras will guide the audience through real-life cases and best practices for resolving issues related to the high cost of cloud services. In particular, he will focus on state-of-the-art approaches to designing and operating cloud environments cost-efficiently, as well as guidelines on how to build transparency and understanding among engineering teams and company stakeholders when addressing the sensitive topic of cost optimization.
DevOps revolutionized software engineering by adopting agile, lean practices and fostering collaboration. The same need exists in data engineering.
In this talk, Antoni will go over how to adopt the best DevOps practices in the data engineering space, and the challenges in adopting them given the different skill sets and varying needs of data engineers.
– What is the API for Data?
– What types of SLOs and SLAs do data engineers need to track?
– How do we adapt and automate the DevOps cycle – plan, code, build, test, release, deploy, operate, and monitor data?
These are challenging questions, and the data engineering space has no good answers yet.
Antoni will demonstrate how a new open-source project, Versatile Data Kit (https://github.com/vmware/versatile-data-kit), answers those questions and helps introduce DevOps practices into data engineering.
The internet has progressed from simple websites with static content to sophisticated web applications offering personalized experiences and intricate functionality, and it is now transforming into Web 3.0.
Let’s delve into the Quality Assurance aspects of the unexplored realms of blockchain technology—examining the challenges, tools, and opportunities they present.
Nowadays we are all witnessing the tremendous speed at which technologies evolve, with more and more of them powered by artificial intelligence at its finest.
AI consists of two inseparably connected major parts: machine learning, and decision-making (proposing an educated, informed opinion in different spheres of life) based on highly optimized algorithms that work over large, collected, and classified statistical and best-practice data sets.
Going a step further, we can already see AI creating basic applications by helping developers automate routine code creation (code skeletons), as well as proposing better design patterns, refactorings, and optimizations for existing application code.
The main topic of this lecture sits one layer above: software test automation and how AI can facilitate it by helping create test data sets, propose decisions and test strategies, improve test result reporting, and even generate test automation code skeletons such as Page Object Models based on the DOM of a given page.
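To make the Page Object Model idea concrete, here is a minimal, hand-written sketch of the kind of skeleton such a tool could generate from a login page’s DOM. The class name, selectors, and driver abstraction are illustrative assumptions, not output from any specific tool.

```typescript
// Illustrative Page Object Model skeleton (assumed selectors and driver).
interface Driver {
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

class LoginPage {
  // Selectors derived from the page's DOM.
  private readonly emailInput = "#email";
  private readonly passwordInput = "#password";
  private readonly submitButton = "button[type='submit']";

  constructor(private readonly driver: Driver) {}

  // One high-level action per user intent keeps tests readable.
  async login(email: string, password: string): Promise<void> {
    await this.driver.fill(this.emailInput, email);
    await this.driver.fill(this.passwordInput, password);
    await this.driver.click(this.submitButton);
  }
}
```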
A comparison of the main automation tools that use AI, and how well they self-educate and keep improving (expectations versus reality), is also covered.
Securing services running on OCI container images through FIPS (Federal Information Processing Standards): an overview of how to secure OpenSSL, Java, Python, and Go containers, and why security matters most to government services.
This presentation will cover the evolution of an SRE team and the stages of developing a complex, all-inclusive monitoring system. We will discuss our main drivers, the challenges the team faced, how the design and architecture evolved over time, and the benefits and best practices.
“Web 2.5: Decoding the Future for Developers” is your gateway to understanding one of the most transformative technologies of our era. Tailored specifically for developers unfamiliar with blockchain, this lecture breaks down its key concepts, terminologies, and potential impact. From understanding ‘blocks’ and ‘chains’ to delving into ‘consensus mechanisms’ and ‘smart contracts’, we will embark on a journey to illuminate the foundational pillars of the blockchain universe. Join me to explore this digital frontier and grasp the key terminologies that are reshaping the tech landscape.
A picture is worth a thousand words, they say – but unfortunately, it is not always the case. And this is even more valid for diagrams. If done right, they are a treasure – but we’ve all seen and suffered from spaghetti diagrams and ones bringing additional confusion rather than providing clarity and adding value.
The diagrams are a key element of the “language” we use in the software world. We use them to better explain architectures, data models, processes, and many other concepts. Through diagrams, we communicate with other IT people, business people, external stakeholders, and clients. We create diagrams for documentation, presentations, and proposals, we often do it ad-hoc on whiteboards during meetings and brainstorming discussions. Nevertheless, it’s a fact that very few of us have been trained on how to create effective diagrams. As a result, creating a diagram is a “pain” for many and what comes out is not always excellent, to say the least.
Join this session to level up your skills for creating effective diagrams. I’ll share 7 characteristics of a good diagram plus 7 practical tips and a checklist you can start applying immediately, each presented with examples. Together we will go through the process of “refactoring” an architecture diagram, and I will also share a step-by-step process for creating one from scratch, putting into practice all these principles and tips.
In this lecture, I will introduce the concept of multimodal deep learning and highlight the critical role of data fusion techniques. I’ll begin by explaining the principle of multimodality and how it aligns with the inherently multimodal nature of human cognition.
Through real-world examples, such as networks that merge audio and video, audio and accelerometer, or audio and text, I’ll illustrate how multimodal learning is implemented in practice.
A key part of the discussion will be devoted to data fusion techniques — early, late, and hybrid fusion. I’ll present their applications and discuss their respective advantages and potential limitations.
To conclude, I’ll provide a brief overview of the future of multimodal deep learning, touching on potential developments and challenges. The aim of this lecture is to offer a succinct yet comprehensive understanding of multimodal deep learning, demonstrating its transformative potential in the field of AI.
About
ISTA is not just a conference. Organized since 2011 by IT professionals for IT professionals, ISTA has become a tradition. This is one of the biggest and most prominent tech events in the region.
ISTA is the place to be for anyone who is truly passionate about information technology, development, quality, automation, and innovation. Throughout the years, ISTA has gathered IT professionals and world-renowned speakers who have shared their knowledge and expertise.
The conference is all about collaboration, knowledge sharing, meeting new friends, inspiring and being inspired in a world of constantly evolving technology.
Organized by leading IT companies in Bulgaria – Experian, Infragistics, Musala Soft, SAP & VMware – ISTA combines the ability of the five organizations to create INNOVATION, to SHARE KNOWLEDGE, and to bring together people who CHANGE THE WORLD.
Join our ISTA world of Discoverers and Innovators!