Development team management involves a combination of technical leadership, project management, and the ability to grow and nurture a team. These skills have never been more important, especially with the rise of remote work both across industries and around the world. The ability to delegate decision-making is key to team engagement. Review our inventory of tutorials, interviews, and first-hand accounts of improving the team dynamic.
Conflict Management in Technology Teams: Insights From Google's Project Aristotle
Creating Value With Scrum
Cultivating a culture of continuous improvement within Scrum teams or Agile teams is pivotal for personal well-being, enhancing effectiveness, building trust with stakeholders, and delivering products that genuinely enhance customers’ lives. This post dives into the top ten actionable strategies derived from the Scrum Anti-Patterns Guide book, providing a roadmap for teams eager to embrace Kaizen practices. From embracing Scrum values and fostering psychological safety to prioritizing customer feedback and continuous learning, these strategies offer a comprehensive approach to fostering innovation, collaboration, and sustained improvement.

My Top Ten Continuous Improvement Actions for Teams

To foster a culture of continuous improvement and embrace Kaizen practices, here are enhanced suggestions for Scrum or Agile teams looking to improve their effectiveness, build trust with stakeholders, and deliver products that significantly impact customers’ lives:

Regular and Reflective Retrospectives

Importance: Conducting regular Retrospectives enables teams to pause and reflect on their past actions, practices, and workflows, pinpointing both strengths and areas for improvement. This continuous feedback loop is critical for adapting processes, enhancing team dynamics, and ensuring the team remains agile and responsive to change.

First Step: Guarantee the consistency of your Retrospectives at every Sprint's conclusion. Before these sessions, collaboratively plan an agenda that promotes openness and inclusivity. Facilitators should incorporate practices such as anonymous feedback mechanisms and engaging games to ensure honest and constructive discussions, setting the stage for meaningful progress and team development.
Implement Improvement Actions

Importance: Actioning identified improvements demonstrates the team’s commitment to continuous enhancement and ensures that insights gained from Retrospectives and feedback are put into practice, leading to tangible benefits and progress.

First Step: At the end of each Retrospective, work together to prioritize improvement actions and decide on clear ownership. Integrate these actions into your team’s workflow, tracking them on the Kanban board or task list like any other task. Regularly review their progress in subsequent Retrospectives to confirm they’re on track for completion and to evaluate their impact, ensuring these efforts lead to meaningful improvements in your processes and outcomes. Offer support when responsible individuals struggle to make progress.

Embrace Scrum Values

Importance: Integrating Scrum values deeply into the team’s ethos fosters a work environment conducive to continuous improvement. These values guide behaviors and decision-making processes, ensuring that every team member is aligned and committed to the principles of Scrum, thus enhancing collaboration and effectiveness.

First Step: Host a dedicated session where the team collaboratively discusses each Scrum value and identifies specific actions or behaviors that exemplify these values in their daily work. Create a team charter or working agreement that incorporates these values and commits to holding each other accountable—as professionals do.

Build Psychological Safety

Importance: Psychological safety is the foundation of a team’s ability to innovate, take risks, and communicate openly without fear of negative consequences. It is essential for fostering an environment where continuous improvement can thrive, as team members feel comfortable sharing ideas, challenges, and feedback.

First Step: Kick off an assessment of the team’s current psychological safety status through anonymous surveys.
Follow up with a workshop focused on active listening, empathy building, and conflict resolution skills. Regularly check in on progress and set psychological safety as a recurring agenda item in Retrospectives. Also, reach out to the leadership level when the team’s safety is compromised at the organizational level to initiate a discussion on how to improve the situation.

Promote Stakeholder Collaboration

Importance: Effective stakeholder collaboration ensures the team’s efforts align with the broader business goals and customer needs. Engaging stakeholders throughout the development process invites diverse perspectives and feedback, which can highlight unforeseen areas for improvement and ensure that product development is on the right track.

First Step: Engage your stakeholders as a team, starting with the Sprint Reviews. Moreover, develop clear communication channels and feedback mechanisms to facilitate ongoing dialogue; you want feedback from your stakeholders not just at the Sprint Review. Remember to tailor the communication to meet your stakeholders’ needs; sometimes, this may require a written report. From time to time, consider offering joint stakeholder-team Retrospectives.

Empower Team Decision-Making

Importance: Allowing the team to make decisions about their work increases their sense of ownership and responsibility towards the outcomes. This level of self-management is crucial for fostering an environment where continuous improvement is driven by those closest to the work, leading to more effective and timely improvements.

First Step: Begin by crafting a decision-making framework that encourages collaboration and aims for unanimous agreement rather than starting with a consent-based or majority-decision-based model. Jumping straight into consent or majority decisions can hinder team unity and lead to the formation of factions. Apply this inclusive approach to decisions identified during Retrospectives.
It provides a platform for the team to practice and refine their collective decision-making skills, ensuring every team member’s viewpoint is acknowledged and valued.

Utilize Visual Management Tools

Importance: Visual management tools like Kanban boards provide transparency about work progress, priorities, and bottlenecks. This visibility helps the team manage their workflow more effectively, identify improvement opportunities, and make informed decisions.

First Step: Collaboratively establish or enhance your Kanban board to ensure it genuinely represents the team’s workflow. It should incorporate features like columns or signals for work-in-progress limits, blocked tasks, and quality control checkpoints. Make it a routine to assess and update the board in Retrospectives, adapting it to evolving team needs and processes. Consider techniques like “walking the board” during Daily Scrum sessions.

Focus on Customer Feedback

Importance: Prioritizing customer feedback grounds the team’s efforts in real user needs and experiences, driving improvements directly relevant to customer satisfaction and product success. It ensures the team remains focused on delivering value and solving the right problems.

First Step: Establish a systematic process for gathering feedback, which could involve organizing user interviews, deploying surveys, or conducting beta testing sessions. Integrate regular feedback review periods into Retrospectives, Sprint Planning and Sprint Review sessions, or Product Backlog refinement meetings to ensure customer insights are continuously woven into the development cycle.

Cultivate a Growth Mindset

Importance: Encouraging a growth mindset within the team fosters an attitude of learning and resilience. It helps team members view challenges and setbacks as opportunities for growth, driving personal and team development and innovation.
First Step: Facilitate a workshop on understanding and embracing a growth mindset, featuring exercises to dismantle fixed mindset beliefs. Motivate team members to identify and pursue personal development objectives and to frequently exchange insights and achievements, possibly through mechanisms like a ‘learning log’ or in the context of Sprint Retrospectives.

Continuous Learning and Skill Development

Importance: Investing in continuous learning and skill development ensures the team remains adaptable and capable of overcoming new challenges. It supports the evolution of team capabilities and the introduction of innovative solutions, keeping the team and product at the forefront of industry trends.

First Step: Allocate a specific time each Sprint for team members to engage in learning activities, such as online courses, workshops, and brown-bag, pair- or mob programming sessions. Promote disseminating newly acquired knowledge and abilities via ‘learning showcases’ or internal mini-workshops, fostering a shared growth and expertise culture.

Additional Considerations for Continuous Improvement

While the strategies provided offer a solid foundation for fostering a culture of continuous improvement within Scrum and Agile teams, remember that the journey towards improvement is ongoing and unique to each team’s context. Here are a few additional considerations:

Customization is key: Adapt these strategies to fit your team’s specific needs, challenges, and dynamics. What works for one team may not work for another.

Measure progress: As a team, establish metrics or indicators to track the effectiveness of implemented changes. Measuring progress helps understand the improvements’ impact and guides further adjustments.

Leadership support: Ensure leadership is on board and supportive of these initiatives. Their backing can significantly influence the success of efforts to foster a continuous improvement culture.
Celebrate successes: Recognize and celebrate the successes, no matter how small. Acknowledging achievements boosts morale and reinforces the value of the continuous improvement efforts.

Stay patient: Change takes time. Encourage patience and persistence among team members. Continuous improvement is a marathon, not a sprint; the benefits accumulate over time.

By considering these additional points, you will be better positioned to navigate the complexities of implementing continuous improvement practices and cultivating a resilient and innovative team culture.

Conclusion

Embracing continuous improvement through these ten strategies is crucial for Scrum and Agile teams aiming to elevate their performance and product quality. By regularly reflecting on practices, including everyone on the team, and focusing on customer-centric solutions, teams can foster a dynamic environment where innovation flourishes. Implementing these suggestions will improve team dynamics and stakeholder relationships and ensure that products continually evolve to meet and exceed customer expectations. Start small, prioritize actionable changes, and build momentum toward cultivating a robust culture of continuous improvement and excellence in your team’s journey.

How does your team practice continuous improvement? Please share your experience with us in the comments.
In today's dynamic business landscape, where retailers, banks, and consumer-facing applications strive for excellence and efficiency in customer support, the reliance on tools like JIRA for project management remains paramount. However, the manual creation of tickets often results in incomplete information, leading to confusion and unnecessary rework, particularly in sectors where live chatbots play a crucial role in providing real-time support to end-users. In this article, we'll explore how AI chatbots, powered by large language models, can streamline manual ticket creation. With artificial intelligence in play, businesses can reshape their project management strategies and deliver flawless customer support experiences.

Solution

The proposed solution leverages ChatGPT, a large language model from OpenAI, together with LangChain, an open-source library that facilitates smooth integration with OpenAI. Please note that you can also use Llama 2 models with LangChain for this use case.

Figure 1: Leveraging an LLM-enabled chatbot

The solution components include:

LangChain agents: The fundamental concept behind agents involves using a language model to decide on a sequence of actions. Unlike chains, where actions are hardcoded, agents use a language model as a reasoning engine to determine which actions to execute and in what sequence.

Tools: When constructing the agent, we will need to provide it with a list of tools that it can use. We will create a custom tool for the Jira API.

Chat memory: LangChain agents are stateless: they don't remember anything about previous interactions. Since we want the AI model to collect all the relevant information from the user before creating the JIRA ticket, we need the model to remember what the user provided earlier in the conversation.
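Before wiring in a real model, the agent loop can be pictured with a deliberately simplified, LLM-free sketch. All names here are hypothetical and for illustration only: a decision function stands in for the language model, inspecting the conversation state and either asking for missing information or invoking a tool, which mirrors the loop a LangChain agent runs with a real model in the decision seat.

```python
# Toy agent loop (illustrative only; a real agent delegates the decision to an LLM).
def decide_next_action(state: dict):
    """Stand-in for the language model: choose the next action from known state."""
    for field in ("project", "summary"):
        if field not in state:
            return ("ask_user", f"Please provide the {field}.")
    return ("create_ticket", state)

def create_ticket(details: dict) -> dict:
    """Stand-in for the Jira tool."""
    return {"key": f"{details['project']}-101", "summary": details["summary"]}

tools = {"create_ticket": create_ticket}

state = {}
answers = iter([("project", "KAN"), ("summary", "Fix login")])  # simulated user replies
while True:
    action, payload = decide_next_action(state)
    if action == "ask_user":
        field, value = next(answers)  # the "user" supplies the missing field
        state[field] = value
    else:
        result = tools[action](payload)
        break

print(result)  # {'key': 'KAN-101', 'summary': 'Fix login'}
```

The key point the sketch makes is that the sequence of actions is not hardcoded: the decision function re-evaluates the state on every turn, which is exactly the role the LLM plays in an agent.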
Installing LangChain

Let's first install all the dependencies:

```shell
pip install -U langchain-openai langchain atlassian-python-api
```

Let's set the environment variables:

```python
import os

os.environ["JIRA_API_TOKEN"] = "<jira_api_token>"
os.environ["JIRA_USERNAME"] = "<jira_username>"
os.environ["JIRA_INSTANCE_URL"] = "<jira_instance_url>"
os.environ["OPENAI_API_KEY"] = "<open_api_key>"
```

Now, let's initialize the model. For this article, we will leverage OpenAI models.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="model-name", temperature=0)
```

Creating Tools

We will define the input schema using the Pydantic Python library. Pydantic is a data validation and settings management library in Python that is widely used for defining data models. Pydantic guarantees that input data conforms to specified models, thereby averting errors and discrepancies. It aids in generating documentation from field descriptions, thereby enhancing comprehension of data structures. Let's take a look at the schema defined using Pydantic:

```python
from typing import Optional

from langchain.pydantic_v1 import BaseModel, Field

class TicketInputSchema(BaseModel):
    summary: str = Field(description="Summary of the ticket")
    project: str = Field(description="Project name", enum=["KAN", "ABC"])
    description: str = Field(
        description="Description of the work performed under this ticket."
    )
    issuetype: str = Field(
        description="The issue type of the ticket", enum=["Task", "Epic"]
    )
    priority: Optional[str] = Field(
        description="The priority of the ticket",
        enum=["Urgent", "Highest", "High", "Low", "Lowest"],
    )
```

Based on the code above, summary, project, description, and issuetype are required, while priority is optional. The @tool decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing "ticketcreation-tool" as our tool name.
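As a standalone illustration of the validation Pydantic performs, here is a trimmed-down, hypothetical version of the schema above (using plain pydantic rather than the copy vendored in langchain). Note how a required field that is missing gets rejected, while an optional field may simply be left out:

```python
from typing import Optional

from pydantic import BaseModel, Field, ValidationError

# Trimmed-down, illustrative version of the article's TicketInputSchema.
class TicketInput(BaseModel):
    summary: str = Field(description="Summary of the ticket")
    issuetype: str = Field(description="The issue type of the ticket")
    priority: Optional[str] = Field(default=None, description="Ticket priority")

# Optional fields may be omitted; required ones may not.
ticket = TicketInput(summary="Login page times out", issuetype="Task")
print(ticket.priority)  # None

try:
    TicketInput(summary="No issue type given")
except ValidationError as exc:
    # Pydantic reports exactly which required field is missing.
    print("rejected, missing field:", exc.errors()[0]["loc"])
```

This is the safety net the agent relies on: the tool call cannot proceed until the model has gathered every required field from the user.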
We will pass args_schema as TicketInputSchema, defined above using Pydantic. This forces the language model to validate the input against the schema before proceeding with tool invocation. Additionally, we will include a docstring to help the language model understand the purpose of this tool and the expected output structure. We will leverage JiraAPIWrapper, provided by LangChain, which is a class that extends BaseModel and wraps atlassian-python-api. The atlassian-python-api library provides a simple and convenient way to interact with Atlassian products from Python.

```python
from langchain.utilities.jira import JiraAPIWrapper
```

Let's look at the complete code:

```python
import json

from langchain.tools import tool

@tool("ticketcreation-tool", args_schema=TicketInputSchema)
def ticketcreation(
    summary: str, project: str, description: str, issuetype: str, priority: str
) -> dict:
    """This tool is used to create a Jira issue and returns the issue id, key, and links."""
    payload = json.dumps({
        "project": {"key": project},
        "summary": summary,
        "description": description,
        "issuetype": {"name": issuetype},
        "priority": {"name": priority},
        # "custom_field_10010": {"value": impact},
    })
    response = JiraAPIWrapper().issue_create(payload)
    return response
```

We will use the code below to bind the tools to the model:

```python
tools = [ticketcreation]
llm_with_tools = llm.bind_tools(tools)
```

Memory Management

This solution will leverage ConversationBufferMemory.

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="history", return_messages=True)
```

Defining the Prompt

In a LangChain OpenAI prompt, system messages offer context and instructions, followed by placeholders for user input and the agent scratchpad. The system message component in the prompt lets the model know the context and provides guidance. Here is a sample system message that I have used:

```python
(
    "system",
    """
    You are a skilled chatbot that can help users raise Jira tickets.
    Ask for the missing values. Only allow values from the allowed enum values.
    """,
)
```

Our input variables will be limited to input, agent_scratchpad, and history. input is provided by the user during invocation, containing instructions for the model. agent_scratchpad encompasses a sequence of messages containing previous agent tool invocations and their corresponding outputs. history holds the interaction history and generated output. Here is a sample (partial) history object:

```python
{'history': [HumanMessage(content='Can you help me create a jira ticket'),
             AIMessage(content='Sure, I can help with that. Please provide me with the details for the Jira ticket you would like to create.')],
 'output': 'Sure, I can help with that. Please provide me with the details for the Jira ticket you would like to create.'}
```

And here is the prompt code using ChatPromptTemplate:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """
            You are a skilled chatbot that can help users raise Jira tickets.
            Ask for the missing values. Only allow values from the allowed enum values.
            """,
        ),
        MessagesPlaceholder(variable_name="history"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
```

Agent Pipeline

This pipeline represents the sequence of operations that the data goes through within the agent. The pipeline below is defined using the pipe operator "|", which ensures that the steps are executed sequentially.
```python
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        ),
        "history": lambda x: x["history"],
    }
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)
```

The purpose of the OpenAIToolsAgentOutputParser() component in the pipeline is to parse and process the output generated by the agent during interaction.

Agent Executor

The agent executor serves as the core engine for an agent, managing its operation by initiating activities, executing assigned tasks, and reporting outcomes. The following code demonstrates the instantiation of AgentExecutor.

```python
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent, tools=tools, verbose=True, memory=memory, max_iterations=3
)
```

Session Management

To manage sessions when executing the tool, we will use ChatMessageHistory, a wrapper that offers easy-to-use functions for storing and retrieving various types of messages, including HumanMessages, AIMessages, and other chat messages. The RunnableWithMessageHistory encapsulates another runnable and oversees its chat message history. It's responsible for both reading and updating the chat message history.

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

message_history = ChatMessageHistory()

agent_with_chat_history = RunnableWithMessageHistory(
    agent_executor,
    lambda session_id: message_history,
    input_messages_key="input",
    history_messages_key="history",
)
```

By default, the encapsulated runnable expects a single configuration parameter named "session_id," which should be a string. This parameter is utilized to either create a new chat message history or retrieve an existing one.
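The session_id lookup just described can be pictured with a plain dictionary: a simplified, illustrative stand-in for the history factory passed to RunnableWithMessageHistory (the snippet above returns one shared history for every session; a per-session variant looks like this, with all names hypothetical):

```python
# Minimal per-session registry: one message list per session_id (illustration only).
session_store: dict[str, list[str]] = {}

def get_history(session_id: str) -> list[str]:
    # First sight of an id creates a new history; later calls reuse it.
    return session_store.setdefault(session_id, [])

get_history("user-1").append("Can you help me create a jira ticket?")
get_history("user-1").append("Sure. Which project is it for?")

print(len(get_history("user-1")))  # 2: the same list is returned for the same id
print(get_history("user-2"))       # []: a fresh history for a new session
```

Keeping one history per session is what lets the chatbot serve many concurrent users without their ticket details bleeding into each other's conversations.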
```python
agent_with_chat_history.invoke(
    {"input": message},
    config={"configurable": {"session_id": conversation_id}},
)
```

Conclusion

Integrating AI chatbots driven by large language models offers businesses a significant opportunity to enhance internal operations, streamlining project management and customer support. However, security and hallucination concerns may hinder immediate adoption by external consumers. Careful consideration of these factors is essential before implementing AI chatbots for customer-facing purposes.
Microsoft Research, a key player in the technology research landscape, has established a unique lab structure that fosters tech leadership and innovation. In this article, we delve into the various aspects of Microsoft Research Labs' management approach, highlighting data-driven insights that showcase their success in fostering innovation and leadership.

Encouraging Autonomy and Flexibility: Impact on Research Output

Researchers at Microsoft Research enjoy a high level of autonomy and flexibility in selecting their research projects. This freedom nurtures creativity, risk-taking, and groundbreaking ideas. As a result, Microsoft Research has published over 20,000 peer-reviewed publications and filed more than 10,000 patents since its inception in 1991.

Flat Organizational Structure: Accelerating Decision-Making

Microsoft Research's flat organizational structure facilitates direct access to senior management and decision-makers. By reducing bureaucracy and streamlining decision-making, researchers can quickly pivot their projects and respond to new opportunities. This agility has contributed to the rapid development and integration of innovations like Microsoft Azure Machine Learning, HoloLens, and Xbox Kinect into commercial products.

Goal Setting and Performance Evaluation: Fostering Collaboration

Microsoft Research emphasizes setting clear goals and expectations for researchers, focusing on both short-term objectives and long-term visions. By tracking metrics such as collaboration levels, knowledge sharing, and interdisciplinary research, Microsoft Research has successfully fostered a culture of teamwork and innovation. For example, the company's annual internal TechFest event brings together researchers from different labs to showcase their work and collaborate on new ideas.
Collaboration Tools and Platforms: Facilitating Global Connectivity

Microsoft Research leverages various tools and platforms to facilitate seamless collaboration among researchers, both within and across labs. Researchers have access to cutting-edge resources like Microsoft Teams, SharePoint, and Azure DevOps, enabling them to work together efficiently across geographical boundaries. This global connectivity has led to numerous cross-lab collaborations, such as Project Premonition, which combines expertise from the Redmond, Cambridge, and New York labs to develop early warning systems for disease outbreaks.

Internal Knowledge Sharing Events: Promoting Continuous Learning

Microsoft Research organizes various internal events, such as conferences, workshops, and lecture series, where researchers can share their work, learn from others, and build relationships. These events have proven effective in fostering a culture of knowledge sharing and continuous learning. For instance, the Microsoft Research Faculty Summit brings together hundreds of academic researchers each year to discuss emerging trends and collaborate on new projects.

Emphasis on Diversity and Inclusion: Driving Innovation

Microsoft Research recognizes the value of diversity and inclusion in driving innovation. The organization actively promotes a diverse workforce, ensuring that researchers from different backgrounds, cultures, and perspectives can contribute their unique insights. As a result, Microsoft Research has received numerous accolades for its commitment to diversity, including being named one of the "Top 50 Companies for Diversity" by DiversityInc.

Continuous Learning and Skill Development: Preparing Researchers for the Future

Microsoft Research is committed to supporting the continuous learning and skill development of its researchers.
The organization offers various resources, such as training programs, workshops, and access to online courses, to help researchers stay up-to-date with the latest advancements in their fields and develop new skills. As an example, Microsoft Research's partnership with MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) allows researchers to participate in joint research projects and gain exposure to cutting-edge techniques.

Conclusion

Microsoft Research's lab structure plays a crucial role in fostering tech leadership and driving innovation. By encouraging autonomy, promoting collaboration, setting clear goals, leveraging modern tools, and emphasizing diversity and continuous learning, Microsoft Research creates an environment where researchers can thrive and contribute to the cutting edge of technology. The data-driven insights presented in this article demonstrate the effectiveness of Microsoft Research's management approach, serving as a model for other organizations looking to foster innovation and develop tech leaders of the future.
I knew a Chief Software Architect at a major financial organization who was an anomaly: he had never developed software professionally. His undergraduate degree was in Accounting and Finance, and his most hands-on technology role was as a DBA. [His LinkedIn profile lists an early Programmer role, though he insisted he never held it.] Even so, he was well-respected within his organization for his thought leadership and solutions; nevertheless, it seemed an unusual career path. Since I last worked with him, he has moved into C-level roles at other organizations, confirming his abilities as a technology leader.

Then I thought of others I have worked with who are non-technical but positioned to impact technical direction, and realized their lack of understanding impacted (and continues to impact) the quality of the software solutions we, as engineers, are expected to deliver.

Chief Non-Technical Officer

This CTO has been with her/his company for many years in many roles: Director of Support, Chief Strategy Officer, Chief Cultural Officer, and Chief Technical Officer. S/he does not deny being a weak technologist – at times even wearing it as a badge of honor – yet confidently states decisions and direction that become a fait accompli: alternatives that challenge her/his understanding are not often well received. At times, her/his inner circle helps to form a more nuanced understanding, but only to a point: overcoming her/his existing preconceived notions is difficult, and blatant opposition results in being sidelined from future discussions. By no means is s/he a total technical novice, but fundamental change requires extensive effort and time.

Her/his oft-repeated mantra went something like this: "Don't tell me you're refactoring; refactoring brings no value to our customers." Harking back to her/his strategy days, where feature-feature-feature is the overwhelming driver, this mantra confirmed her/his denial or lack of understanding of the current state of the product.
The growing and maturing customer base made clear that areas of the product needed love and attention, but proposed efforts to address them were not prioritized because – in her/his view of the world – there was no visible benefit to customers, at least to the customers asking for new or extended features. The real technologists of the company understood the potential benefits to both the customer and the company: performance and scaling improvements, reduced cloud costs, faster deployments, fewer outages, faster feature delivery, a reduced technology stack, and a consistent and intuitive user experience. Regardless of potential benefits, nothing labeled refactoring would survive planning. The problems continued to grow, and the problems continued to be ignored. Sigh.

Product

To be clear, I have no interest in becoming a product owner: the wide-ranging responsibilities require a breadth of knowledge and experience not often found in a single person, while their many stakeholders – both internal and external – have contradictory goals and agendas that need to be balanced. I view it as a political role, finding compromises that please (appease) most, with no one getting everything s/he desires. This role is not for the weak and timid.

Once we accept that product owners are unlikely to have the background or experience necessary to handle all responsibilities, we can understand why the focus falls on those responsibilities understood or deemed important by their leaders. Outside of organizations offering technical solutions, product owners often have a stronger business understanding than technology understanding, based on their work experience. Perhaps not surprisingly, the product is then defined by business expectations more than by technical requirements: future features and functionality are defined by understanding industry trends, reviewing customer feedback, interpreting sales and usage analytics, defining the user experience, etc.
In essence, the product owner is an overclocked business analyst.

Real-World Example

A particular product manager focused only on rapidly releasing new features, regardless of technical stability. Over time, the issues rose to the point where outages – not processing failures, actual outages – occurred daily and could no longer be ignored. She continued to view the work as unnecessary and not beneficial to the product, resulting in this exchange during quarterly planning:

The result is that product owners often eschew – whenever possible – the technology and technical-viability aspects of the product, reducing the impact of technology during product planning. Instead of top-down planning, individual engineers attempt to push technical issues bottom-up, which is very difficult and often unsuccessful. Organizations require a strong engineering discipline and culture to offset the business focus of product owners, but it remains a frustrating challenge. [Of course, production technology issues do arise that demand immediate attention, but the resulting work is stressful, particularly for the engineers responsible for implementing the required changes; the result is often a one-off effort rather than a fundamental change to the overall culture.]

The Not-Ready-For-Prime-Time Implementation

This is less about an individual or role and more about an organizational culture problem: proofs-of-concept assumed to be production-ready. Software proofs-of-concept (POCs) are created to test new business concepts or to determine the usefulness or applicability of new technology. POCs should be created with minimal engineering rigor, allowing a quick and cheap implementation that can be discarded without guilt once the results are evaluated. Most important, a POC is not intended to be a workable product. Despite these clear expectations, too often I've seen the business get excited at seeing the POC and want it available to customers immediately.
The POC might be slightly enhanced or it might be unaltered, but it's out there for the world (internal or external) to use. And when the problems start appearing – because, by definition, it was not intended for real-world usage – the finger-pointing begins. Agile advocates snigger and say, "You needed an MVP, silly!" but my experiences with MVPs are much the same as with POCs: poor. By definition, an MVP is a complete application without the bells and whistles, but corners are inevitably cut: crawling (of the crawl/walk/run paradigm) when current volumes require walk, run, or even fly; minimal or non-existent observability; a non-standard user experience; incomplete or incorrect API definitions; security through obscurity; incomplete error handling. When leaders decide to move forward after a successful MVP, the expectation is to expand and enhance the MVP implementation; in fact, it may be better to start over. [I am not disavowing MVPs' usefulness but rather clarifying that organizations misuse/abuse the term and are, in fact, creating glorified POCs that are not complete, are not ready for users, and are not production-ready. Just saying…]

So when you next hear of an Access application that is integrated into the enterprise supply-chain workflow, don't say I didn't warn you. Organizations that make ignorant decisions on the production-readiness of applications shouldn't wonder why failures occur later, yet they do, and the engineers are left to pick up the pieces.

What Can You Do?

It's not hopeless, really. It isn't necessarily fun, but there are strategies that you can attempt.

Gather

Create a personal archive of articles, use cases, scenarios, and data that allows you to tell stories to non-technical people, helping them understand the tradeoffs present in all organizations. Internally, you might be interested in estimated vs. actual effort for feature delivery, production failure rates, or implementation costs mapped to the customer base.
Are cloud costs increasing faster than customer growth? Did assumptions made during the initial implementation affect the ability to deploy future features, whether positively or negatively? Is supposedly important work upended by unknown and unplanned initiatives? Did a potential security breach impact customer confidence? What was the cost of researching a potential security breach? Is data quality affecting your reporting, analytics, and billing? There are many different ways to try to understand what’s happening within your organization. Almost daily, there are new articles online that highlight the issues and problems other organizations experience: Southwest’s 2022 holiday meltdown, a ransomware attack on Vital Care Providers, and Cloudflare’s bad software deployment. Not every organization publishes postmortems, but details often leak through other channels. Perhaps more importantly, your organization doesn’t want to appear in those articles! Educate As most non-technical folks appear unable or unwilling to accept that software is hard, our responsibility – for better or worse – is to show and explain. Unique situations require adjusting the story told, but it is necessary – and never-ending – if we are to have any chance of getting the organization to understand: explaining how software is developed and deployed; demonstrating how a data-driven organization requires quality data to make correct decisions; explaining the advantages and disadvantages of leveraging open-source solutions; showing examples of how open-source licenses impact your organization’s intellectual property. Look for opportunities to inject background and substance when appropriate, as education is open-ended. Often, it will appear no one is listening as you repeat yourself, but eventually – hopefully – someone will parrot what you’ve been saying for months.
Negotiate Aside from those employed in purely research and development roles, engineering for engineering’s sake is not feasible, as technology concerns must be balanced with business concerns: the product and its competitors, the sales pipeline, customer support and feature requests, security, privacy, compliance, etc. Each decision has short- and long-term impacts, and it is very unlikely that everyone involved will be pleased. Sorry, but that’s corporate politics. That does not mean you roll over and play dead, but rather horse-trade, often with management and product, to ensure the technical concerns aren’t forgotten: Ensure that changes in business priorities are coupled with impact analysis on in-process development efforts; Accept less-than-optimal initial implementations with the agreement of fast-follow work to address the compromises; Define metrics that identify when technology-focused work should be prioritized over feature work. These ideas may or may not apply to your organization or situation, but hopefully they will spark ideas you can pursue. Conclusion The problems I’ve discussed are age-old and seem to have become worse in recent decades, so I’m not sure if any of this is a surprise. Perhaps this is only the latest incarnation of the problem, and a post-Agile approach will reap benefits. Perhaps leaders will acknowledge that engineers really do understand the problems and can be trusted to implement solutions rather than being handed solutions that fit an arbitrary (and often unrealistic) timeline. It’s a tug-of-war that I don’t yet see resolved. Image Credits “Pointy Hair Boss” © Scott Adams “Productivity: Putting the Kanban Display Together” by orcmid is licensed under CC BY 2.0. “Analog circuit board prototype” by mightyohm is licensed under CC BY-SA 2.0.
DevOps encompasses a set of practices and principles that blend development and operations to deliver high-quality software products efficiently and effectively by fostering a culture of open communication between software developers and IT professionals. Code reviews play a critical role in achieving success in a DevOps approach mainly because they enhance the quality of code, promote collaboration among team members, and encourage the sharing of knowledge within the team. However, integrating code reviews into your DevOps practices requires careful planning and consideration. This article discusses the strategies you should adopt to implement code reviews successfully in your DevOps practice. What Is a Code Review? A code review is a process used to evaluate the source code of an application with the purpose of identifying any bugs or flaws within it. Typically, code reviews are conducted by developers on the team other than the person who wrote the code. To ensure the success of your code review process, you should define clear goals and standards, foster communication and collaboration, use a code review checklist, review small chunks of code at a time, embrace a positive code review culture, and include automated tools in your code review workflow. The next section talks about each of these in detail. Implementing Code Review Into a DevOps Practice The key principles of DevOps include collaboration, automation, CI/CD, Infrastructure as Code (IaC), adherence to Agile and Lean principles, and continuous monitoring. There are several strategies you can adopt to implement code review into your DevOps practice successfully: Define Clear Goals and Code Review Guidelines Before implementing code reviews, it’s crucial to establish clear objectives and guidelines to ensure that the code review process is both efficient and effective.
This helps maintain coding standards and sets a benchmark for reviewers’ expectations. These goals should include identifying bugs, enforcing best practices, maintaining coding standards, and facilitating knowledge sharing among team members. Develop code review guidelines that encompass the criteria for reviewing code, including aspects like code style, performance optimization, security measures, readability enhancements, and maintainability considerations. Leverage Automated Code Review Tools Leverage automated code review tools that run automated checks for code quality. To ensure proper code reviews, it’s essential to choose tools that align with your DevOps principles. Options range from the basic pull request functionality built into version control systems such as GitLab, GitHub, and Bitbucket, to platforms like Crucible, Gerrit, and Phabricator, which are specifically designed for conducting code reviews. When making your selection, consider factors like user-friendliness, integration with your development tools, support for code comments and discussion threads, and the ability to track the progress of the code review process. Define a Code Review Workflow Establish a clear workflow for your code reviews to streamline the process and avoid confusion. You should define when code reviews occur, such as before merging changes, during feature development, or before deploying the software to the production environment. Specify the duration allowed for code review, outlining deadlines for reviewers to provide feedback. Ensure that the feedback loop is closed: the developers who wrote the code address the review comments, and the reviewers validate the changes made. Review Small and Digestible Units of Code A single code review cycle should not attempt to cover a large amount of code.
Instead, split the code into smaller, manageable chunks for review. This helps reviewers direct their attention to specific features or elements, allowing them to offer constructive suggestions. Reviewers are also less likely to overlook critical issues when reviewing smaller chunks of code, resulting in a more thorough and detailed review. Establish Clear Roles and Responsibilities Typically, a code review team comprises the developers, the reviewers, the lead reviewer or moderator, and the project manager or team lead. A developer initiates the code review process by submitting a piece of code for review. A team of code reviewers reviews the code and may request improvements or clarifications. The lead reviewer or moderator is responsible for ensuring that the code review process is thorough and efficient. The project manager or team lead ensures that code reviews are completed within the agreed time frame and that the code is aligned with the broader project goals. Embrace Positive Feedback Constructive criticism is an essential element of a successful code review process. Improving the code’s quality is easier if you encourage constructive feedback. Developers responsible for writing the code should actively seek feedback, while reviewers should offer suggestions and ideas. Acknowledge the hard work, knowledge exchange, and improvements that result from fruitful code reviews. Conduct Regular Training An effective code review process should incorporate a training program to facilitate learning opportunities for team members. Conducting regular training sessions and setting clear goals for code review are essential elements of a successful code review process. Regular training plays a key role in enhancing the knowledge and capabilities of team members, enabling them to boost their skills.
By investing in training, team members can unlock their potential, leading to overall success for the entire team. Capture Metrics To assess the efficiency of your code review procedure and pinpoint areas that require improvement, it is crucial to monitor metrics. You should set a few tangible goals before starting your code review process and then capture metrics (CPU consumption, memory consumption, I/O bottlenecks, code coverage, etc.) accordingly. Your code review process will be more successful if you use the right tools to capture the desired metrics and measure success against them. Conclusion Although the key intent of a code review process is identifying bugs or areas of improvement in the code, there is a lot more you can gain from a successful code review. An effective code review process ensures consistency in design and implementation, optimizes code for better performance and scalability, helps teams collaborate to share knowledge, and improves the overall code quality. That said, for a code review process to succeed, it is imperative that code reviews are received in a positive spirit and that review comments help the team learn and enhance their knowledge and skills.
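As a footnote to the Capture Metrics point above, a minimal sketch of what aggregating per-review metrics might look like is shown below. The field names (turnaround_hours, defects_found) and the sample numbers are illustrative assumptions, not tied to any particular review tool:

```python
def review_metrics(reviews):
    """Aggregate simple code-review metrics from per-review records."""
    turnarounds = [r["turnaround_hours"] for r in reviews]
    defects = [r["defects_found"] for r in reviews]
    return {
        # Average time reviewers take to provide feedback.
        "mean_turnaround_hours": sum(turnarounds) / len(turnarounds),
        # How many issues a typical review catches.
        "defects_per_review": sum(defects) / len(defects),
    }

# Hypothetical sample of three completed reviews.
sample = [
    {"turnaround_hours": 4, "defects_found": 2},
    {"turnaround_hours": 10, "defects_found": 0},
    {"turnaround_hours": 7, "defects_found": 1},
]
print(review_metrics(sample))
# → {'mean_turnaround_hours': 7.0, 'defects_per_review': 1.0}
```

Tracking even two numbers like these over time makes it possible to tell whether workflow changes (smaller chunks, stricter deadlines) are actually helping.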
A sprint retrospective is one of the four ceremonies of Scrum. At the end of every sprint, the product owner, the scrum master, and the development team sit together and talk about what worked, what didn’t, and what to improve. The basics of a sprint retrospective meeting are clear to everyone, but its implementation is subjective. Some think the purpose of a sprint retrospective meeting is to evaluate work outcomes. However, as per the Agile Manifesto, it is more about evaluating processes and interactions. It says, “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” Many scrum teams are not making the most of sprint retrospective meetings due to a lack of understanding. In this post, we will look at what to avoid in a sprint retrospective meeting and what to do to run one effectively. What To Avoid in a Sprint Retrospective Meeting? A sprint retrospective meeting is an opportunity for the scrum team to come together and discuss the previous sprint with the purpose of improving processes and interactions. But scrum teams often turn the sprint retrospective into a negative session of blaming one another, and it becomes less engaging when the outcomes of previous retrospectives are never implemented. Here are a few things that you need to avoid in a sprint retrospective meeting: 1. Focusing on the Outcomes The end goal of a sprint retrospective is undoubtedly to increase the sprint velocity of the team, but the way to do so is not to talk about the outcome of the sprint. The focus is on finding areas that can be improved in processes and people to make it easy and efficient for the scrum team to work together. 2. Not Involving All Team Members’ Voices The output of a scrum team is evaluated as a team, not as individuals. Therefore, it is important that each member of the scrum team is heard.
Thus, the equal participation of all team members is required in retros. If someone has issues and is not voicing them, it is going to impact the sprint output, as the members of the sprint team are highly dependent on each other to achieve sprint goals. 3. Talking Only About What Went Wrong The purpose of a sprint retrospective is to make improvements, but that does not mean you should not talk about the good things. We are all human beings and need appreciation. If you talk only about what did not work, a sprint retrospective will become more of a tool for blaming and beating each other up than an instrument of improvement. Above all, it is important to talk about what went well so that you can replicate the good things in the next sprint. 4. Not Taking Action on Retro Outcomes The worst thing that can happen to a sprint retrospective is not acting on the items derived from it. This leads to a loss of interest and trust in sprint retrospectives, as it sends a message to the team that their feedback is not valuable. What To Do To Run an Effective Sprint Retrospective Meeting There are some basics you can follow to run an effective sprint retrospective meeting. Have a look at them. 1. Create a Psychologically Safe Space for Everyone To Speak To make a sprint retrospective successful, it is the responsibility of the product owner and the scrum master to create a psychologically safe environment in which everyone can speak up. If you are asking questions like what went well during the last sprint, what didn’t go well, and what should we do differently next time, everyone should feel safe to share their views without any repercussions. 2. Use a Framework The best way to conduct an effective sprint retrospective meeting is to follow a template. Experts have created various frameworks for conducting effective sprint retrospective meetings.
The top frameworks include: Mad, Sad, Glad; the Speed Car retrospective; the 4 Ls retrospective; Continue, Stop, Start-Improve; and What went well? What didn’t go well? What is to improve? These frameworks help ensure that you are talking about processes, not people. For example, the Mad, Sad, Glad framework asks what made the team mad, sad, and glad during the sprint, and how items can move from the mad and sad columns to the glad column. Use a framework that works for your scrum team. 3. Have a Facilitator-in-Chief Like any other meeting, a sprint retrospective meeting needs to have a goal, a summary, and a facilitator. Have a facilitator-in-chief to make sprint retrospectives valuable. Usually, the role is given to the scrum master, whose responsibilities in a sprint retrospective are to: Set the agenda and goals of the sprint retrospective. Collect feedback from all the team members on the action items to talk about in the retro. Define the length of the meeting. Follow up on action items implemented in the last sprints. Summarize the key action items for the next sprint. 4. Implement the Action Items The responsibility of the scrum master does not end with a sprint retrospective. A scrum master needs to make sure that the action items found in the sprint retrospective are implemented in the upcoming sprint. Daily stand-up meetings are a great tool for the scrum master to ensure that the team is implementing what was agreed upon and discussed and is making improvements. You can also see the results of sprint retrospectives in tangible terms with metrics like sprint velocity. 5. Positivity, Respect, and Gratitude for Everyone Lack of engagement is the biggest challenge for sprint retrospectives in the long run. It occurs when action items are not worked on, people are not heard, and the focus is on negatives. Cultivate positivity and have respect and gratitude for everyone. Talk about what can be improved rather than blaming individuals.
Listen to others to show respect, and express gratitude to acknowledge everyone’s contributions. Paired with the implementation of action items, this ensures that your scrum team sees sprint retrospectives as an opportunity to improve. Conclusion A sprint retrospective is a great opportunity to look back at what worked well, what went wrong, and what can be done to improve going forward. It is a great instrument for a business to improve efficiency, keep its workforce happy, and build products that both clients and end-customers love. The only challenge is that you need to use it appropriately. With the insights shared in this post, there is a good chance you will be able to run effective sprint retrospective meetings and bring actual value to the table.
In this article, we are going to look at the challenges faced when rapidly scaling engineering teams in startups as well as other kinds of companies with a focus on product development. These challenges vary across different types of companies, sizes, and stages of maturity. For instance, the growth of a consultancy software company focused on outsourcing is very different from that of a startup focused on product development. I’ve experienced a lot of team growth myself and have seen teams grow in several companies, and most of them have faced the same challenges and problems. Challenges The following are some of the challenges or problems that we will have to address in high-growth scenarios: Keeping growth aligned with productivity: many companies grow, but the output unfortunately falls far short of the goals. Avoiding team frustration due to failure to achieve growth goals. Avoiding too much time being consumed by the hiring process for the engineering teams. Avoiding the demotivation of newcomers due to chaotic onboarding processes: the onboarding process is the first experience in the company. Maintaining and promoting the cultural values defined by the organization. Keeping the impact on delivery aligned with the defined goals and risks. Ensuring new hires meet expectations and goals in terms of value contribution. Navigating the Challenges Goals Goals are the main drivers of the growth strategy. They need to be challenging, but also realistic, and linked to the mid-term and long-term vision. Challenging: Push the team to go beyond their comfort zone and strive for excellence. This requires effort, innovation, planning, and agility. Realistic: Ensure the goals can be achieved, to avoid frustration and burnout. The growth of the company and its success should enhance the motivation and inspiration of the team. Long-term: Goals have to be aligned with the company’s long-term vision and span a wide time range.
Large growth cannot be organized with only the next three months in mind, because that may be the time it takes just to find suitable candidates. Goals have to be measurable, clear, and specific to: Promote accountability Evaluate and measure the goal’s success Make data-driven decisions All growth requires dedication and effort from the team; time that they will not dedicate to product evolution or development. Example: Unrealistic Goal Let’s suppose we have a team of 10 engineers divided into 2 squads: backend and platform. The company sets the following goals: Triple the team in 1 year, from 10 to 30 engineers. Keep the delivery performance. Create three new squads: DevOps, Data Platform, and Front End. Promote the culture. Only hire top-tier engineers. Most likely, we will have to evaluate at least four candidates in interviews and technical exercises for each position, in addition to the time dedicated to the onboarding process. Usually, there is more than one engineer collaborating in the hiring process, so we are likely to have a significant impact on delivery. Finding a team of experienced and highly qualified people is not an easy task. It is necessary to define what we consider "talent" and the different levels at which we can hire. Maintaining and promoting the culture in a high-growth environment where, in one year, there are more new people than the existing team is very complex and requires a good strategy, clear objectives, and agility in decision-making. With this, we want to show that any one of these objectives would already be ambitious – but all of them together make it practically impossible to achieve success. Talent Acquisition and Hiring Process The talent acquisition team plays a crucial role in a company’s growth strategy, but they need the support of the whole company. C-levels and hiring managers have to provide all the support and be involved as one team.
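Returning to the unrealistic-goal example above, a quick back-of-the-envelope calculation shows the interview load such a target implies. The hours-per-candidate and engineers-per-evaluation figures below are assumptions for illustration; only the hire count and candidate ratio come from the example:

```python
def interview_load(new_hires, candidates_per_position,
                   hours_per_candidate, engineers_per_interview):
    """Estimate total engineer-hours spent interviewing for a growth target."""
    candidates = new_hires * candidates_per_position
    return candidates * hours_per_candidate * engineers_per_interview

# Example above: grow from 10 to 30 engineers (20 hires), ~4 candidates
# evaluated per position; assume ~3 hours of interviews and exercise review
# per candidate, with 2 engineers involved in each evaluation.
hours = interview_load(new_hires=20, candidates_per_position=4,
                       hours_per_candidate=3, engineers_per_interview=2)
print(hours)  # → 480 engineer-hours diverted from delivery
```

Even with these conservative assumptions, the existing team of 10 absorbs hundreds of hours of hiring work in a year, which is exactly the delivery impact the example warns about.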
Clear Communication Foster open and clear communication between teams to ensure that everyone understands the goals and the role each team plays in the process. Review Pipeline Quality Sometimes many candidates go through the early stages of the pipeline and are ultimately discarded, which generates a lot of frustration in the engineering team because the analysis of each candidate requires significant effort. It is important to adjust the requirements and search criteria for candidates in the early stages of the pipeline, and this requires constant communication between the teams. Market Knowledge Talent acquisition teams should provide insights into market trends and competitor strategies. This knowledge gives the company important information for defining expectations and strategy and staying ahead in the market. Cultural Values It is important to keep in mind that each engineer who joins our team brings his or her own culture, shaped by factors such as work experience, personality, or the country where they live. Even when these people fit the cultural pattern we are looking for, most of the time they have not yet absorbed the culture of the company, and the hiring process alone cannot guarantee this. If maintaining the culture is important to the company, we need to mentor new employees starting with the recruitment process itself: Promote values in the hiring process. Promote values in the company and team onboarding process. Promote values during the first years through the mentoring process. Promoting the cultural values and the company’s goals are tasks that must be done continuously, but we must reinforce and review them with new hires more frequently. Onboarding In my opinion, the onboarding process plays a huge role in almost all companies and is not given enough attention. It is especially important in high-growth companies.
The two main problems are: No real onboarding process: Onboarding consists of a meeting with human resources, another with the manager, and finally the team: a three-hour process. This can only be considered a welcome meeting. Highly technical processes: Processes oriented almost entirely toward performing your first deployment, which mainly promote knowledge silos and little engagement with the company. The onboarding process must be led by the organization. It must be structured and must encourage a smooth integration of new hires into the organization, minimizing disruption and maximizing productivity over time. In addition, the entire onboarding process should be a step-by-step process with as much documented support as possible. This would be a base structure for a complete onboarding process: Pre-boarding: Includes all the activities that occur between the acceptance of the offer and the new hire’s first day. Continuous communication is important because it promotes motivation and cultural values and helps to create a feeling of belonging within the company. Welcome Day: Welcome meeting, company overview, and review of company policies and cultural values; paperwork, documentation, and enrollment processes; initial equipment setup; introduction to the team and manager; security training. Company 360 (scheduled by month): 360-degree meetings with leaders from all departments provide valuable insights, foster collaboration, and help new employees understand the broader organizational context. Starting the first week: Cultural values and goals: The manager and the team share the cultural values and team goals. The goals have to be clear and mostly measurable. Mentorship: Assign a mentor to support the integration process, at least during the first year. Engineering tech best practices and tools: Share the vision of the organization’s architecture principles, DevOps, data principles, tools, and best practices. Role-specific training. Team integration: Start participating in team meetings.
Feedback and evaluation: Feedback must always be continuous, honest, and constructive. We must put extra emphasis on new hires to adjust goals, mentoring, or training. It is best to start with one-to-one sessions and include this evaluation and feedback in them. Starting in the third month: Performance evaluation. Continuous learning is part of the cultural values, but at this point learning paths can be considered. Initiate conversations about long-term career paths. It is important to avoid onboarding processes based solely on pairing or shadowing strategies, because they require too much effort and tend to generate silos and misalignment. These sessions are important but must be supported by documentation from both the organization and the team itself. Impact on Delivery The growth phase often requires a high investment of time, effort, and people in the hiring and onboarding processes. Hiring process: Participating in technical sessions, reviewing candidate profiles, and reviewing technical exercises. Onboarding: Onboarding a new engineer to a team is always time-consuming and usually involves a learning curve until these people can offer more value than the effort invested in their integration. In the case of large growth, there may be situations in which teams are formed entirely of new engineers. This also has an impact on delivery, because these teams need: Mentors and support to adapt to the new environment Transversal coordination with other squads Talent Density In my opinion, growth should be based on the amount of talent and not on the number of engineers. At this point, there are a number of aspects to consider: What does talent mean to our organization? Finding talent is very complicated. There is a lot of competition in the market, people specialized in hiring processes, and pressure to grow. Many people mistake talent for knowledge or years of experience.
In my case, I have always given more value to a person’s potential for the new role and for the organization than to their experience in the role or the companies in which they have worked. The fit of a new hire is not determined only by the hiring process but also by the evaluation period. Moreover, it is during the evaluation period that we can really evaluate the person, and it is in this period that the decision is less painful for both parties: a person who does not fit in the organization will have a negative impact both on themselves and on the organization. Team Topology These growth scenarios require changes in the organization and the creation of new teams or departments. Two fundamental factors must be taken into account: Team creation strategy Conway’s Law Team Creation Strategy There are several strategies for developing the organization of teams: Integrate new hires into existing squads. Integrate new hires into existing squads and, after some time, divide the team in two. Create entirely new teams with new hires. Create a new team from current leadership and new hires. The decision to apply a single approach or a combination of several depends on various factors, including the organization’s specific needs, resource availability, and long-term objectives. Conway’s Law Conway’s Law is a principle in software engineering and organizational theory: Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure. Conway’s Law suggests that the communication patterns, relationships, and team structures within an organization are reflected in the architecture, design, and interfaces of the software or systems they build. Summary The growth of engineering teams is one of the most complex challenges facing a growing organization, especially if this growth must be aligned with productivity and cultural goals.
Hiring the number of people we have set as a target can be easy. Hiring the right people can be almost impossible, and hiring a sufficient proportion of talented people is very difficult. This can only be done well if you work as a team.
GenAI is everywhere you look, and organizations across industries are putting pressure on their teams to join the race – 77% of business leaders fear they’re already missing out on the benefits of GenAI. Data teams are scrambling to answer the call. However, building a generative AI model that actually drives business value is hard. And in the long run, a quick integration with the OpenAI API won’t cut it. It’s GenAI, but where’s the moat? Why should users pick you over ChatGPT? That quick check of the box feels like a step forward. Still, if you aren’t already thinking about how to connect LLMs with your proprietary data and business context to actually drive differentiated value, you’re behind. That’s not hyperbole. This week, I’ve talked with half a dozen data leaders on this topic alone. It wasn’t lost on any of them that this is a race. At the finish line, there are going to be winners and losers: the Blockbusters and the Netflixes. If you feel like the starter’s gun has gone off, but your team is still at the starting line stretching and chatting about “bubbles” and “hype,” I’ve rounded up five hard truths to help shake off the complacency. 1. Your Generative AI Features Are Not Well Adopted, and You’re Slow to Monetize “Barr, if generative AI is so important, why are the current features we’ve implemented so poorly adopted?” Well, there are a few reasons. One, your AI initiative wasn’t built to respond to an influx of well-defined user problems. For most data teams, that’s because you’re racing, and it’s early, and you want to gain some experience. However, it won’t be long before your users have a problem that GenAI best solves, and when that happens – you will see much better adoption than your tiger team brainstorming ways to tie GenAI to a use case. And because it’s early, the generative AI features that have been integrated are just “ChatGPT, but over here.” Let me give you an example.
Think about a productivity application you might use every day to share organizational knowledge. An app like this might offer a feature to execute commands like “Summarize this,” “Make longer,” or “Change tone” on blocks of unstructured text. One command equals one AI credit. Yes, that’s helpful, but it’s not differentiated. Maybe the team decides to buy some AI credits, or maybe they simply click over to the other tab and ask ChatGPT. I don’t want to completely overlook or discount the benefit of not exposing proprietary data to ChatGPT. Still, it’s also a smaller solution and vision than what’s being painted on earnings calls across the country. That pesky middle step from concept to value. So consider: what’s your GenAI differentiator and value add? Let me give you a hint: high-quality proprietary data. That’s why a RAG model (or sometimes a fine-tuned model) is so important for GenAI initiatives. It gives the LLM access to the enterprise’s proprietary data. (I’ll explain why below.) 2. You’re Scared To Do More With GenAI It’s true: generative AI is intimidating. Sure, you could integrate your AI model more deeply into your organization’s processes, but that feels risky. Let’s face it: ChatGPT hallucinates and can’t be predicted. There’s a knowledge cutoff that leaves users susceptible to out-of-date output. There are legal repercussions to data mishandling and providing consumers with misinformation, even if accidental. Sounds real enough, right? Llama 2 sure thinks so. Your data mishaps have consequences, and that’s why it’s essential to know exactly what you are feeding GenAI and that the data is accurate. In an anonymous survey we sent to data leaders asking how far away their team is from enabling a GenAI use case, one response was, “I don’t think our infrastructure is the thing holding us back.
We’re treading quite cautiously here – with the landscape moving so fast and the risk of reputational damage from a ‘rogue’ chatbot, we’re holding fire and waiting for the hype to die down a bit!” This is a widely shared sentiment among the data leaders I speak to. If the data team suddenly surfaces data in customer-facing systems, then they’re on the hook. Data governance is a massive consideration and a high bar to clear. These are real risks that need solutions, but you won’t solve them by sitting on the sideline. There is also a real risk of watching your business be fundamentally disrupted by the team that figured it out first. Grounding LLMs in your proprietary data with fine-tuning and RAG is a big piece of this puzzle, but it’s not easy…

3. RAG Is Hard

I believe that RAG (retrieval augmented generation) and fine-tuning are the centerpieces of the future of enterprise generative AI. While RAG is the simpler approach in most cases, developing RAG apps can still be complex. Can’t we all just start RAGing? What’s the big deal? RAG might seem like the obvious solution for customizing your LLM. But RAG development comes with a learning curve, even for your most talented data engineers. They need to know prompt engineering, vector databases and embedding vectors, data modeling, data orchestration, and data pipelines, all for RAG. And, because it’s new (introduced by Meta AI in 2020), many companies just don’t yet have enough experience with it to establish best practices.

RAG implementation architecture

Here’s an oversimplification of RAG application architecture: RAG architecture combines information retrieval with a text generator model, so it has access to your database while trying to answer the user’s question. The database has to be a trusted source that includes proprietary data, and it allows the model to incorporate up-to-date and reliable information into its responses and reasoning.
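A minimal sketch of that retrieval-plus-generation flow might look like the following. Everything here is hypothetical and for illustration only: the documents are made up, and the “vector search” is faked with word-overlap scoring, where a real system would use an embedding model, a vector database, and an actual LLM client.

```python
# Toy RAG chain: retrieve proprietary context, then build a grounded prompt.

KNOWLEDGE_BASE = [
    "Acme's enterprise plan includes 24/7 support and a 99.9% uptime SLA.",
    "Acme AI credits expire 12 months after purchase.",
    "The Acme API rate limit is 100 requests per minute per key.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words shared by query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents from the proprietary store."""
    return sorted(KNOWLEDGE_BASE, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the LLM in retrieved data so it answers from trusted context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the API rate limit?")
# `prompt` would now be sent to whichever LLM you use.
```

Nothing here is vendor-specific; the point is the shape of the chain: retrieve trusted proprietary context first, then generate from it.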
In the background, a data pipeline ingests various structured and unstructured sources into the database to keep it accurate and up-to-date. The RAG chain takes the user query (text), retrieves relevant data from the database, and then passes that data and the query to the LLM to generate a highly accurate and personalized response. There are a lot of complexities in this architecture, but it has important benefits: It grounds your LLM in accurate proprietary data, making it much more valuable. It brings your models to your data rather than bringing your data to your models, which is a relatively simple, cost-effective approach. We can see this becoming a reality in the Modern Data Stack. The biggest players are working at breakneck speed to make RAG easier by serving LLMs within the environments where enterprise data is stored. Snowflake Cortex now enables organizations to quickly analyze data and build AI apps directly in Snowflake. Databricks’ new Foundation Model APIs provide instant access to LLMs directly within Databricks. Microsoft released Microsoft Azure OpenAI Service, and Amazon recently launched the Amazon Redshift Query Editor.

Snowflake data cloud

I believe all of these features have a good chance of driving high adoption. But they also heighten the focus on data quality in these data stores. If the data feeding your RAG pipeline is anomalous, outdated, or otherwise untrustworthy, what’s the future of your generative AI initiative?

4. Your Data Isn’t Ready Yet Anyway

Take a good, hard look at your data infrastructure. Chances are, if you had a perfect RAG pipeline, a fine-tuned model, and a clear use case ready to go tomorrow (and wouldn’t that be nice?), you still wouldn’t have clean, well-modeled datasets to plug it all into. Let’s say you want your chatbot to interface with a customer. To do anything useful, it needs to know about that organization’s relationship with the customer.
If you’re an enterprise organization today, that relationship is likely defined across 150 data sources and five siloed databases…3 of which are still on-prem. If that describes your organization, it’s possible you are a year (or two!) away from your data infrastructure being GenAI-ready. This means that if you want the option to do something with GenAI someday soon, you need to be creating useful, highly reliable, consolidated, well-documented datasets in a modern data platform… yesterday. Or the coach will call you into the game, and your pants will be down. Your data engineering team is the backbone of data health. A modern data stack enables the data engineering team to monitor data quality continuously. It’s 2024 now. Launching a website, application, or any data product without data observability is a risk. Your data is a product, requiring data observability and governance to pinpoint data discrepancies before they move through a RAG pipeline.

5. You’ve Sidelined Critical GenAI Players Without Knowing It

Generative AI is a team sport, especially when it comes to development. Many data teams make the mistake of excluding key players from their GenAI tiger teams, and that’s costing them in the long run. Who should be on an AI tiger team? Leadership, or a primary business stakeholder, to spearhead the initiative and remind the group of the business value. Software engineers, to develop the code, the user-facing application, and the API calls. Data scientists, to consider new use cases, fine-tune models, and push the team in new directions. Who’s missing here? Data engineers. Data engineers are critical to GenAI initiatives. They understand the proprietary business data that provides the competitive advantage over ChatGPT, and they build the pipelines that make that data available to the LLM via RAG. If your data engineers aren’t in the room, your tiger team is not at full strength.
The most pioneering companies in GenAI are telling me they are already embedding data engineers in all development squads.

Winning the GenAI Race

If any of these hard truths apply to you, don’t worry. Generative AI is in such nascent stages that there’s still time to start over and, this time, embrace the challenge. Take a step back to understand the customer needs an AI model can solve, bring data engineers into earlier development stages to secure a competitive edge from the start, and take the time to build a RAG pipeline that can supply a steady stream of high-quality, reliable data. And invest in a modern data stack. Tools like data observability will be a core component of data quality best practices – and generative AI without high-quality data is just a whole lot of fluff.
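The data-quality gating described above can start small. Here is a sketch of a freshness and completeness check that could run before records reach a RAG index; the field names, thresholds, and records are assumptions for illustration, not any particular tool’s API.

```python
# Illustrative pre-RAG data-quality gate: reject records that are stale
# or missing required fields before they are indexed for retrieval.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)                       # assumed freshness window
REQUIRED_FIELDS = ("customer_id", "plan", "updated_at")

def is_healthy(record: dict, now: datetime) -> bool:
    """True if all required fields are present and the record is fresh."""
    if any(record.get(f) is None for f in REQUIRED_FIELDS):
        return False
    return now - record["updated_at"] <= MAX_AGE

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"customer_id": 1, "plan": "pro", "updated_at": now - timedelta(days=2)},
    {"customer_id": 2, "plan": None, "updated_at": now - timedelta(days=2)},   # missing field
    {"customer_id": 3, "plan": "free", "updated_at": now - timedelta(days=90)},  # stale
]
healthy = [r for r in records if is_healthy(r, now)]
# Only the first record passes; the others would be flagged, not indexed.
```

Real observability platforms do far more (anomaly detection, lineage, alerting), but even a gate this simple keeps the worst data out of the retrieval path.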
Junior, Middle, and Senior: that’s how a Software Engineer (SWE) career ladder looks, right? But what does this mean? Different companies have different definitions, so the borders are blurred. In this article, I’m going to share my considerations regarding levels in software engineering and try to rethink what the path might look like. A kind of disclaimer: this is only my vision and not the ultimate truth, so I’m happy to hear your feedback.

What Is Wrong With Current Levels

They are polysemantic. From what I see on the market, from my own experience, and from the careers I have tracked, different companies have different definitions of Junior/Middle/Senior engineers. Some of them have even more levels: Staff, Principal, and Distinguished Engineer, to better express the seniority of highly experienced individual contributors. One of the key problems with “Senior SWE” is that people with absolutely different experiences might get this title. Technically, a Senior Mobile Engineer is not the same as a Senior Frontend Engineer or a Senior Backend Engineer. These are different specializations, and in general it would not be correct to move from Senior Mobile Engineer to Senior Backend Engineer without any downgrade. But why? Logic dictates that the soft skills are the same, and the life experience is the same as well (because it is still the same person). Only one thing changed: the ability to solve problems. You might be an extremely experienced Mobile Developer, but you have never solved issues within a web browser, problems with distributed systems, etc. So, let me take this particular criterion as the separator between levels. The first milestone is simple problem-solving.

1. Pathfinder: Random-Way Simple Problem Solver

Originally, a pathfinder was someone who found or created a path through an unexplored or wild area. This term was often used to describe explorers or scouts who ventured into unknown territories, paving the way for others to follow. They were crucial in mapping new lands and navigating through difficult terrains.
Sometimes I hear, “You have to look after a Junior dev, but a Middle one works fully on their own.” Is that true? Not really. People at the Middle level usually do not see the wider picture, so they cannot make the best decisions by design. Either way, you need to look after every developer: to help them stay within the project’s norms and to keep over-engineered solutions from leaking in. So, a Random-Way Simple Problem Solver (the short form, RwSPS, scares me as well), a.k.a. Pathfinder, is able to solve atomic problems (problems that cannot be decomposed in any relevant way). Think about one task that someone else prepared for a Pathfinder. Would you call them Junior? Middle? It doesn’t matter, because you measure their results by the ability to solve business problems. OK, a Pathfinder will solve your problems. SOMEHOW. Is that enough? It depends. If you are creating a short-lived project and all tasks are straightforward, a group of Pathfinders will probably be enough. But for a long-lived project, you need problems solved in a simple way; otherwise, maintenance becomes a nightmare.

2. Specialist: Simple-Way Simple Problem Solver

A Specialist has deep, extensive knowledge and expertise in a specific field. Specialists are highly skilled in their area, often focusing on a narrow aspect of a discipline. A Simple-Way Simple Problem Solver (SwSPS), a.k.a. Specialist, is the next level after Pathfinder. The key difference is that Specialists have enough experience to solve simple problems predictably, in a simple way. That might be proper framework/library usage, or assembling solutions from existing components. For example: where a Pathfinder tries to handle nulls with IFs, a Specialist will use nullable types to make nullability strict by design.
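That null-handling contrast can be sketched in Python, where `Optional` annotations plus a type checker such as mypy approximate what languages with built-in nullable types enforce by design. The function itself is a made-up example.

```python
from typing import Optional

# Pathfinder style: untyped, nullability handled by scattered IF checks
# that every caller has to remember on their own.
def email_domain_v1(email):
    if email is None:
        return None
    if "@" not in email:
        return None
    return email.split("@")[1]

# Specialist style: the same logic, but Optional in the signature makes the
# null contract explicit, so a type checker forces callers to handle None.
def email_domain_v2(email: Optional[str]) -> Optional[str]:
    if email is None or "@" not in email:
        return None
    return email.split("@")[1]
```

The behavior is identical; the difference is that the second version states its contract in the type system instead of relying on discipline.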
Where a Pathfinder might add logging at the start and the end of every method explicitly, a Specialist will just use Aspect-Oriented Programming (those who believe that AOP is unacceptable should throw tomatoes in the comments). Where a Pathfinder might refactor ten lines of code one by one, a Specialist will use the IDE’s multi-cursor to introduce changes in many places simultaneously. With experience, exposure to more and more working code, and mastery of tooling, Specialists will provide more reliable solutions faster. This level is still limited to atomic tasks only. Sounds like the next growth point!

3. Generalist: Random-Way Complex Problem Solver

A Generalist solves problems by synthesizing and applying knowledge from various domains. Generalists are often effective in dealing with new or unforeseen challenges due to their adaptable and flexible approach. What lies outside of a simple problem? Other simple problems! The source of all these simple problems is one or more complex problems that software engineers have to decompose before they start working. Let’s define a complex problem. In the context of this article, a complex problem is one that should be decomposed for the sake of better predictability of implementation time. A complex problem might also consist of other complex problems that have to be decomposed eventually. The key difference between a Random-Way Complex Problem Solver (Generalist) and a Simple-Way Simple Problem Solver (Specialist) is scale. A Generalist is still able to solve simple problems in a simple way, but their experience with complex problems is not enough to follow the same approach for complex tasks. Here are a few examples: Generalists might design a new complex system starting with microservices, ignoring the fact that the customer-facing system has fewer than 10 unique users in total so far. Generalists might start with an on-premise database instead of relying on managed services, even when the requirements do not call for it and the key motivation is past experience.
Generalists might bring redundant, complex technologies from their previous company, ignoring the fact that the previous and the current companies are at different stages of business maturity. As Generalists gain experience in solving complex problems, they start finding ways to reach simpler solutions quickly, which means the next level is coming.

4. Navigator: Simple-Way Complex Problem Solver

Historically, navigators were crucial on ships and aircraft. In modern contexts, the term is used in roles requiring strategic planning and direction-setting, as in project management or leadership positions in companies. It is counterintuitive that solving problems in a simple way is harder, but the reason behind this is ignorance. At the beginning of your path, you have a high level of unawareness of already available solutions and ready-to-go components; sometimes, they appear while you are developing your own. Simple-Way Complex Problem Solvers (Navigators) can dive deeply and seamlessly into an unknown environment, map their experience onto it, find a simple solution, and, by default, expect that something is already available instead of reinventing the wheel. A few examples: A Navigator would never start by creating a marketplace if the business is about selling things, not SaaS. A Navigator would research available options before planning and designing. A Navigator is fine with Google Sheets + Forms to launch the business. A Navigator provides solutions relevant to the current business stage.

Benefits of the Alternative Level Classification

Relative to the classical level set, this gradation: Is more transparent in terms of specific business requirements. Is measurable in practical tasks. Correlates with experience. Aligns expectations with those of a particular company.

Conclusion

A past “Senior” job title doesn’t say anything about your real ability to solve complex problems at a new company.
People should align their skills with reality and not pretend to be Seniors only because they already hold the title. Moving through Pathfinder -> Specialist -> Generalist -> Navigator requires constant self-education, so don’t waste your time. And please, don’t tell me that I showed myself to be a Pathfinder by describing such a simple topic with such a complex classification :)
Meetings are a crucial aspect of software engineering, serving as a platform for collaboration, communication, and decision-making. However, they often come with challenges that can significantly impact the efficiency and productivity of software development teams. In this article, we will delve deeper into the issues associated with meetings in software engineering and explore the available data.

The Inefficiency Quandary

Meetings are pivotal in providing context, disseminating information, and facilitating vital decisions within software engineering. However, they can be inefficient, consuming a substantial amount of a software engineer’s workweek. According to Clockwise, the average individual contributor (IC) software engineer spends approximately 10.9 hours per week in meetings. This staggering figure amounts to more than a quarter of a 40-hour workweek dedicated to meetings. As engineers progress in their careers or transition into managerial roles, the time spent in meetings increases. One notable observation is that engineers at larger companies often find themselves in even more meetings. This is commonly referred to as the “coordination tax,” where the need for alignment and coordination within larger organizations leads to a higher volume of meetings. While these meetings are essential for keeping teams synchronized, they can also pose a significant challenge to productivity.

The Cost of Unproductive Meetings

The impact of meetings on software engineering extends beyond time allocation and has financial implications. Research by Zippia reveals that organizations spend approximately 15% of their time in meetings, with a staggering 71% of those meetings considered unproductive. This means that considerable time and resources invested in discussions may not yield the desired outcomes. Moreover, unproductive meetings come with a substantial financial burden: it is estimated that businesses lose around $37 billion annually due to unproductive meetings.
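To make the 10.9 hours/week figure concrete, here is a back-of-the-envelope estimate of what that meeting load costs per engineer per year. The working weeks and hourly rate are assumptions for illustration, not figures from the studies cited above.

```python
# Rough annual cost of the meeting load for one IC engineer.
HOURS_PER_WEEK_IN_MEETINGS = 10.9   # Clockwise figure cited above
WORK_WEEKS_PER_YEAR = 48            # assumed, net of vacation and holidays
HOURLY_RATE_USD = 75                # assumed fully loaded hourly cost

annual_meeting_hours = HOURS_PER_WEEK_IN_MEETINGS * WORK_WEEKS_PER_YEAR
annual_meeting_cost = annual_meeting_hours * HOURLY_RATE_USD

print(round(annual_meeting_hours, 1))  # 523.2 hours in meetings per year
print(round(annual_meeting_cost))      # 39240 dollars per engineer per year
```

Even under these modest assumptions, meetings cost tens of thousands of dollars per engineer per year, which is why trimming the unproductive 71% matters.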
On an individual level, workers spend an average of 31 hours per month in unproductive meetings. This not only affects their ability to focus on critical tasks but also impacts their overall job satisfaction.

The Impact on Software Engineering

In the realm of software engineering, the inefficiencies and challenges associated with meetings can have several adverse effects:
Delayed Development: Excessive or unproductive meetings can delay project timelines and hinder software development progress.
Reduced Productivity: Engineers forced to spend a significant portion of their workweek in meetings may struggle to find uninterrupted “focus time,” which is crucial for deep work and problem-solving.
Resource Drain: The coordination tax imposed by meetings can strain resources, leading to increased overhead costs without necessarily improving outcomes.
Employee Morale: Prolonged or unproductive meetings can decrease job satisfaction and motivation among software engineers.
Ineffective Decision-Making: When meetings are not well-structured or attended by the right participants, critical decisions may be postponed or made without adequate information.
Meetings are both a necessity and a challenge in software engineering. While they are essential for collaboration and decision-making, the excessive time spent in meetings and their often unproductive nature can hinder efficiency and impact the bottom line. In the following sections, we will explore strategies to address these challenges and make meetings in software engineering more effective and productive.

The Benefits of Efficient Technical Meetings in Software Engineering

In the fast-paced world of software engineering, efficient technical meetings can be a game-changer. They are the lifeblood of collaboration, problem-solving, and decision-making within development teams.
In this article, we’ll explore the advantages of conducting efficient technical meetings and how they can significantly impact the productivity and effectiveness of software engineering efforts. Meetings in software engineering are not mere formalities; they are essential forums where ideas are exchanged, decisions are made, and project directions are set. However, they can quickly become a double-edged sword if not managed effectively. Inefficient meetings can drain valuable time and resources, leading to missed deadlines and frustrated teams. Efficiency in technical meetings is not just a buzzword; it’s a critical factor in the success of software engineering projects. Here are some key benefits that efficient meetings bring to the table:
Time Savings: Efficient meetings are succinct and stay on topic. This means less time spent in meetings and more time available for actual development work.
Improved Decision-Making: When meetings are focused and well-structured, decisions are made more swiftly, preventing bottlenecks and delays in the development process.
Enhanced Collaboration: Efficient meetings encourage active participation and open communication among team members. This collaboration fosters a sense of unity and collective problem-solving.
Reduced Meeting Fatigue: Prolonged, unproductive meetings can lead to fatigue, hindering team morale and productivity. Efficient meetings help combat this issue.
Knowledge Sharing: With a focus on documentation and preparation, efficient meetings facilitate the sharing of insights and knowledge across the team, promoting continuous learning.
To achieve these benefits, we will delve into a five-step methodology for making technical discussions more efficient. While not a silver bullet, this approach has proven successful in many scenarios, particularly within teams of senior engineers. This methodology places a strong emphasis on documentation and clear communication.
It encourages team members to attend meetings well-prepared, with context and insights, ready to make informed decisions. By implementing this methodology, software engineering teams can balance the need for collaboration and the imperative of focused work. In the following sections, we will explore each step of this methodology in more detail, understanding how it can revolutionize the way software engineers conduct technical meetings and, ultimately, how it can drive efficiency and productivity within the team.

Step 1: Context Setting

The initial step involves providing context for the upcoming technical discussion. Clearly articulate the purpose, business requirements, and objectives of the meeting. Explain the reasons behind holding the meeting, what motivated it, and the criteria for considering it a success. Ensuring that all participants understand the importance of the discussion is critical.

Step 2: Send Invitations With Context

After establishing the context, send meeting invitations to the relevant team members. It is advisable to provide at least one week’s notice to allow participants sufficient time to prepare. Consider using tools like Architecture Decision Records (ADRs) or other documentation formats to provide comprehensive context before the meeting.

Step 3: Foster Interaction

To maximize efficiency, encourage collaborative discussions before the scheduled meeting. Share the ADR or relevant documentation with the team and allow them to engage in discussions, provide feedback, and ask questions. This approach ensures that everyone enters the meeting with a clear understanding of the topic and can prepare with relevant references and insights.

Step 4: Conduct a Focused Meeting

When it’s time for the meeting, maintain a concise and focused approach. Limit the duration of the meeting to no longer than 45 minutes. This time constraint encourages participants to stay on track and make efficient use of the meeting.
Avoid the trap of allowing meetings to expand unnecessarily, as per Parkinson’s law.

Step 5: Conclusion and Next Steps

After the meeting, clearly state the decision that has been made and summarize the key takeaways. If the discussion led to a decision, conclude the Architecture Decision Record or relevant documentation. If further action is needed, create a list of TODO activities and determine what steps are required to move forward. If additional meetings are necessary, return to Step 2 and schedule them based on the progress made. By following these key steps, software engineering teams can streamline their technical discussions, making them more efficient and productive while preserving valuable time for product development and innovation. This approach encourages a culture of documentation and collaboration, enabling teams to make informed decisions and maintain institutional knowledge effectively.

Conclusion

In the fast-paced world of software engineering, efficient technical meetings play a crucial role, offering benefits such as time savings, improved decision-making, enhanced collaboration, reduced meeting fatigue, and knowledge sharing. To harness these advantages, a five-step methodology has been introduced, emphasizing documentation, clear communication, and preparation. By adopting this approach, software engineering teams can balance collaboration and focused work, ultimately driving efficiency, innovation, and productivity.
Arun Pandey: Accredited Investor, Enterprise Coach, Sr. TechLead, Topcoder Ambassador

Otavio Santana: Award-winning Software Engineer and Architect, OS Expert