Getting the right attention and setting priorities for development projects can be hard, especially now that many product businesses try to adapt to a world of online services and new revenue targets. As an architect, you understand the value of all the tasks in your backlog, but a limited budget and deadlines force you to prioritize. Inspired by the Strategyzer books, we applied value proposition design to build transformation plans for some of ASSA ABLOY Scandinavia’s software products. We profiled our customer’s jobs and pain points and matched them to values enabled by our software products. Gaps between the stories of our customers and the functionality in our software helped us prioritize our backlog and build a plan that our sales and business colleagues could understand. Do you want to overcome lengthy backlogs and tight development budgets? Would you like to win over your stakeholders by delivering customer value quickly and iteratively? In this session, you'll learn how to use Strategyzer's value proposition design method to prioritize and communicate your development plan.
Some business processes in medical diagnostics involve many large-scale systems distributed through the Roche enterprise and customer clinical laboratories. The systems fulfill valid purposes, can operate separately, and are managed for their own purposes rather than the purposes of the whole. Allocating business functions to specific system components and defining interfaces present significant challenges for architecting such large and complex IT and software system-of-systems (SoS). It requires supporting decision criteria with data and accommodating fast evolution of the architecture. For architecting SoS in Roche’s product context, we defined a customized method based on the U.S. Department of Defense Architecture Framework. The method has six steps, with start/finish criteria and deliverables for each. In this talk, I will illustrate the steps using examples from Roche. The method can be applied in Agile projects if sprints are oriented toward implementing workflows. The customized method for architecting SoS at Roche was applied to business processes related to Remote Solutions and the Internet of Things. It facilitated quick definition of models, derived data to support decision making and the allocation of business functions and resources to existing and to-be-developed systems, and identified common resources and interfaces for their exchange.
Too much design up front, and you could lose time. Not enough design, and your system could crumble in reality. How do you make the right decisions at the right time, and make them with due diligence? How do you embrace the cloud and microservices without risking different failure scenarios or overly complicated maintenance and ripple effects? Product Thinking—understanding the use and need—is critical to avoid over-engineering a product. In this session, we will walk through visualizations that help teams blend product thinking with architecture. Along the way, we will look at microservices, domain modeling, chaos engineering, and fault tolerance and how to emphasize the right strategy at the right time. You will leave this session with simple visualizations and approaches that you can apply immediately to start blending product with architecture, especially if your system will run in the cloud.
Have you heard the term container without fully understanding what it means? Then this session is for you! Containers, often likened to lightweight virtual machines, have become the default packaging mechanism for deploying systems. Docker is the pre-eminent container system. This session will provide some theory of containers and explanations of how containers work. Then, in a supervised hands-on experience, participants will build, execute, deploy, and save a Docker container in a repository. Attendees should have preloaded Docker and executed "Hello World" before the session.
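As a preview of the hands-on portion, a container image is typically described by a Dockerfile. The base image, file name, and script below are illustrative assumptions, not the session's actual exercise:

```dockerfile
# Start from a small official base image
FROM alpine:3.18
# Copy a script into the image's filesystem
COPY hello.sh /hello.sh
# Command run when a container starts from this image
CMD ["/bin/sh", "/hello.sh"]
```

From the directory containing the Dockerfile, `docker build -t hello .` builds the image, `docker run hello` executes it as a container, and `docker tag hello myrepo/hello` followed by `docker push myrepo/hello` saves it to a repository (repository name hypothetical).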
Docker has matured and is expanding from being predominantly used in the build and test stages to production deployments. Similarly, microservices are expanding from being used mostly for green-field web services to being used in the enterprise as organizations explore ways to decompose their monoliths to support faster release cycles. Running microservices-based applications in a containerized environment makes a lot of sense, both for build and test and for runtime in production. Docker and microservices are natural companions, forming the foundation for modern application delivery. However, managing microservices and large-scale Docker deployments poses unique challenges for enterprise IT. This talk describes the modern requirements for building, deploying, and operating microservices on a large-scale Dockerized infrastructure.
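To make the pairing concrete, a containerized microservice and its backing store are often declared together in a Compose file. This is a minimal sketch under assumed names; the services, images, and ports are hypothetical, not from the talk:

```yaml
version: "3.8"
services:
  orders:                      # hypothetical business microservice
    image: example/orders:1.0
    ports:
      - "8080:8080"
    depends_on:
      - orders-db
  orders-db:                   # each service owns its own data store
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

The same file drives local build/test (`docker compose up`) and serves as a starting point for the production-orchestration concerns the talk addresses.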
In this talk, I will share lessons learned from implementing microservices architecture for a regulatory initiative as the Director of Software Engineering at Capital One. The talk will cover the business drivers for the microservices architecture, the lessons we learned from an architecture perspective, and why DevOps practices were important. Key success factors included moving to the cloud, using DevOps methods, and employing domain-driven design. Other practical considerations involved handling production support issues and increased complexity. I will also discuss what cultural and software engineering best practices helped most and some specific communication imperatives to emphasize when you are building an enterprise microservice.
A myriad of point tools, frameworks, and infrastructures are involved in your software delivery process. While many of these tools are free and open source, the operational and technology overhead of orchestrating the hand-offs from one tool to the next in the process is not without cost. To improve developer productivity and resource utilization—and to enable enterprise-scale, cross-project visibility and shorter time to market—organizations are working to automate the entire tool chain across the end-to-end delivery pipeline. This presentation will use a real case study to show how you can orchestrate your entire software delivery pipeline. Learn how to create a fully automated release pipeline by tying in common tools that you likely use in your process from CI build to release, including Git, Jenkins, Selenium, Chef, Docker, and more. Seamlessly orchestrate third-party tools to automate your entire process from start to finish, get visibility into your end-to-end application release pipeline, and deploy any application to any environment using any tool set—for free.
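One common way to express such a tool chain is a Jenkins declarative pipeline checked into the Git repository. This is a sketch under assumed stage names and commands (the Maven build, image name, and registry are hypothetical), not the case study's actual pipeline:

```groovy
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'mvn -B package' }    // triggered by a Git push
    }
    stage('Test') {
      steps { sh 'mvn -B verify' }     // could drive Selenium UI tests
    }
    stage('Package') {
      steps { sh 'docker build -t example/app:${BUILD_NUMBER} .' }
    }
    stage('Publish') {
      steps { sh 'docker push example/app:${BUILD_NUMBER}' }  // hand-off to deployment tooling such as Chef
    }
  }
}
```

Each `stage` is one hand-off in the chain; the pipeline makes the orchestration explicit, versioned, and visible end to end.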
As your organization invests in DevOps to release more frequently, you need to treat the database tier as an integral part of the automated delivery pipeline—you build, test, and deploy database changes just as for any other part of your application. However, databases are different from source code and pose unique challenges to continuous delivery, especially in the context of deployments. Updating the database can be more demanding than updating the application layer: database changes are more difficult to test, and rollbacks are harder. Furthermore, for organizations that strive to minimize service interruption to end users, database updates with no downtime are laborious operations. Your database stores transaction data, business data, and user information—the most mission-critical and sensitive data of your organization. As you update your database, you want to ensure data integrity; atomicity, consistency, isolation, and durability (ACID); and data retention and have a solid rollback strategy in case things go wrong. This talk covers strategies for database deployments and rollbacks.
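One widely used strategy is to ship every schema change as a paired upgrade/downgrade migration and record the applied version in the database itself. The sketch below is illustrative (it is not any specific tool from the talk) and uses SQLite for a self-contained example; real deployments add transactional guarantees, data backfills, and zero-downtime patterns on top:

```python
import sqlite3

# Hypothetical migration list: each entry pairs an upgrade with its rollback.
MIGRATIONS = [
    ("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
     "DROP TABLE users"),
    ("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)",
     "DROP TABLE orders"),
]

def current_version(conn):
    """Read the applied schema version from a bookkeeping table."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    row = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()
    return row[0] or 0

def migrate(conn, target):
    """Move the schema to `target`, upgrading or rolling back as needed."""
    v = current_version(conn)
    while v < target:                        # apply pending upgrades in order
        conn.execute(MIGRATIONS[v][0])
        v += 1
        conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
    while v > target:                        # roll back, newest change first
        conn.execute(MIGRATIONS[v - 1][1])
        conn.execute("DELETE FROM schema_version WHERE v = ?", (v,))
        v -= 1
    conn.commit()
```

Because every change carries its own rollback, the pipeline can deploy the database tier forward and, if things go wrong, step it back deterministically.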
Being agile, with its emphasis on extensive testing, frequent integration, and important product features, has proven invaluable to many software teams. When building complex systems, it is easy to focus on features and overlook software qualities, specifically those related to software architecture. Time has shown that agile practices are not sufficient to prevent or eliminate technical debt, which can affect maintainability and reliability. Without good validation through tests and constant attention to architecture and code quality, many issues arise. It is important to recognize what is core to the architecture and the problem at hand while evolving it. Insufficient attention to the architecture and code can allow technical debt to creep in and the codebase to become muddy, making it hard to deliver new features quickly and reliably. Two principles that can help teams deliver more quickly and with confidence are a focus on code quality and a focus on delivery size. Small, frequent deliveries with constant attention to a good codebase are crucial to sustaining fast, reliable delivery. Practices that can help keep the code clean or prevent it from getting muddier include Testing, Divide & Conquer, Gentrification, Quarantine, Refactoring, and Craftsmanship. This talk examines various practices and techniques that lead to better software quality, all of which enable teams to deliver faster and with more confidence.
Most people have heard of Bitcoin, and they know that blockchain is one of the underlying concepts behind this cryptocurrency. Blockchain is an emerging technology that is receiving a lot of interest—and hype—across enterprises and among analysts. But what does blockchain technology mean for the broader enterprise? It has the potential to greatly disrupt a large number of business processes across various verticals, far beyond Bitcoin. How can blockchain technologies be applied in different verticals, and what does this mean for traditional database-centric approaches? Join this session to learn about blockchain, associated concepts such as smart contracts, and how these technologies may be applied in various use cases and verticals.
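The core data-structure idea behind blockchain is easy to show in miniature: each block commits to its payload and to the hash of the previous block, so tampering with any earlier block invalidates everything after it. This toy sketch illustrates only the hash chaining; real systems add consensus, signatures, and distribution:

```python
import hashlib
import json

GENESIS_PREV = "0" * 64  # conventional placeholder for "no previous block"

def make_block(data, prev_hash):
    """A block commits to its payload and the previous block's hash."""
    digest = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": digest}

def verify(chain):
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS_PREV
        expected = make_block(block["data"], prev)["hash"]
        if block["prev"] != prev or block["hash"] != expected:
            return False
    return True
```

This tamper-evidence, extended with shared validation rules (smart contracts), is what distinguishes blockchains from a traditional centrally administered database.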
This talk introduces “cognitive” APIs, such as those for image recognition, text analysis, recommendations, and predictions. I will explain what cognitive APIs are and how to use them, and I will do a live comparison of four image-recognition endpoints. I will show the similarities in usage patterns and highlight the differences in API design, implementation, and capabilities, including pre-trained APIs and trainable-models-as-a-service. The second half of this talk is a live demo of an e-commerce chatbot that uses a combination of natural-language understanding and API orchestration to deliver commerce features over a conversational interface. I will then explain how the bot is built and where the “smarts” come from. I will discuss the ever-increasing importance of cognitive APIs and how no-screen interfaces like chatbots will stimulate the API economy.
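A recurring chore when comparing such endpoints is that each provider returns a differently shaped JSON payload for the same question. A small adapter can normalize them for side-by-side comparison. The field names below are assumptions loosely modeled on common vision-API responses, not any vendor's actual schema:

```python
def normalize(response):
    """Flatten differing provider payloads into (label, confidence) pairs."""
    if "labels" in response:        # hypothetical "provider A" style
        return [(l["description"], l["score"]) for l in response["labels"]]
    if "tags" in response:          # hypothetical "provider B" style
        return [(t["name"], t["confidence"]) for t in response["tags"]]
    raise ValueError("unrecognized response shape")
```

With results normalized this way, the usage patterns look nearly identical across providers, and the interesting differences shift to model capability, training options, and pricing.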
The Colorado River Agency—among other Bureau of Indian Affairs utilities—has a disparate electrical system in Arizona spanning substation, multiple transmission, secondary and primary electrical networks, and apparatus. Information modeling and standards are key to apply reusable solutions.
Developing software for complex and ever-changing business domains is challenging enough, but factor in the need to integrate with multiple legacy systems while working closely with business experts and it can feel overwhelming. In EventStorming, developers and business experts use sticky notes to map out an event-based story of how a software system behaves. This improves communication and collaboration, uncovers misunderstandings early, and accelerates deeper domain knowledge. Learn how to facilitate an EventStorming workshop with your team, and see how the approach cultivates shared understanding and improves productivity, especially when designing loosely coupled, distributed, event-based systems.
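The sticky notes on the wall are past-tense domain events arranged left to right in time, and that timeline often maps directly onto the events of the eventual system. A minimal sketch, with entirely hypothetical event names standing in for a workshop's orange stickies:

```python
from dataclasses import dataclass

@dataclass
class DomainEvent:
    name: str          # past tense: something that already happened
    triggered_by: str  # the command or actor that caused it

# A hypothetical ordering story as it might appear on the workshop wall.
timeline = [
    DomainEvent("OrderPlaced", "PlaceOrder"),
    DomainEvent("PaymentReceived", "SubmitPayment"),
    DomainEvent("OrderShipped", "ShipOrder"),
]

def story(events):
    """Read the timeline left to right, as on the wall."""
    return " -> ".join(e.name for e in events)
```

Naming events in the past tense keeps the discussion about what happens in the business, not about software design, which is why the technique works with non-developers in the room.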
“Serverless” is a newly popular buzzword and, as with many technology buzzwords, is a complete misnomer (“cloud,” anyone?). In this session, we will discuss what serverless really means, what the differences are among various providers, why you would consider using a serverless architecture, and how you would implement one. We’ll discuss greenfield development as well as a migration path for existing web applications. Other topics will include how the development workflow will change and why, what testing will look like in a serverless world, and some pitfalls to avoid. While a serverless approach can be applied to applications written in a variety of languages, JavaScript will be the language used for this talk, so we will examine some popular serverless JavaScript frameworks, including apex.js, claudia.js, and serverless.js. If you’re interested in full-stack development, DevOps, microservice architecture, containers, cutting operational costs, or just keeping up with the latest application architecture approaches, this talk is for you. For this 90-minute, interactive, participatory workshop, you will need a laptop, a GitHub account, and an Amazon AWS account.
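The unit of deployment in a serverless architecture is a small, stateless function invoked per request. The session's examples are in JavaScript; for a self-contained preview, here is the shape of a Lambda-style handler sketched in Python, with an assumed API-gateway-style event containing a JSON `body`:

```python
import json

def handler(event, context):
    """Minimal Lambda-style function: stateless, invoked once per request.

    Sketch only. The event shape (a JSON string under "body") is an
    assumption modeled on HTTP-gateway integrations, not a fixed contract.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the function holds no state between invocations, the provider can scale it from zero to many instances, which is the source of both the cost savings and several of the pitfalls the session covers.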