Category: DataDecisionMakers


    Agent autonomy without guardrails is an SRE nightmare

    João Freitas is GM and VP of engineering for AI and automation at PagerDuty

    As AI use continues to evolve in large organizations, leaders are increasingly seeking the next development that will yield major ROI. The latest wave of this ongoing trend is the adoption of AI agents. However, as with any new technology, organizations must ensure they adopt AI agents in a responsible way that allows them to facilitate both speed and security. 

More than half of organizations have already deployed AI agents to some extent, with more expecting to follow suit in the next two years. But many early adopters are now reevaluating their approach. Four in 10 tech leaders regret not establishing a stronger governance foundation from the start, suggesting they adopted AI rapidly but left room to improve the policies, rules and best practices that ensure responsible, ethical and legal development and use of AI.

    As AI adoption accelerates, organizations must find the right balance between their exposure risk and the implementation of guardrails to ensure AI use is secure.

    Where do AI agents create potential risks?

    There are three principal areas of consideration for safer AI adoption.

The first is shadow AI: employees using unauthorized AI tools without express permission, bypassing approved tools and processes. While shadow AI has existed as long as AI tools themselves, AI agent autonomy makes it easier for unsanctioned tools to operate outside the purview of IT, introducing fresh security risks. IT should therefore create sanctioned processes for experimentation and innovation so that teams have approved, efficient ways of working with AI.

    Secondly, organizations must close gaps in AI ownership and accountability to prepare for incidents or processes gone wrong. The strength of AI agents lies in their autonomy. However, if agents act in unexpected ways, teams must be able to determine who is responsible for addressing any issues.

    The third risk arises when there is a lack of explainability for actions AI agents have taken. AI agents are goal-oriented, but how they accomplish their goals can be unclear. AI agents must have explainable logic underlying their actions so that engineers can trace and, if needed, roll back actions that may cause issues with existing systems.

None of these risks should delay adoption, but accounting for them will help organizations better ensure their security.

    The three guidelines for responsible AI agent adoption

    Once organizations have identified the risks AI agents can pose, they must implement guidelines and guardrails to ensure safe usage. By following these three steps, organizations can minimize these risks.

    1: Make human oversight the default 

AI agency continues to evolve at a fast pace. However, human oversight is still needed when AI agents are given the capacity to act, make decisions and pursue goals that may impact key systems. A human should be in the loop by default, especially for business-critical use cases and systems. The teams that use AI must understand the actions it may take and where they may need to intervene. Start conservatively and, over time, increase the level of agency given to AI agents.

    In conjunction, operations teams, engineers and security professionals must understand the role they play in supervising AI agents’ workflows. Each agent should be assigned a specific human owner for clearly defined oversight and accountability. Organizations must also allow any human to flag or override an AI agent’s behavior when an action has a negative outcome.

    When considering tasks for AI agents, organizations should understand that, while traditional automation is good at handling repetitive, rule-based processes with structured data inputs, AI agents can handle much more complex tasks and adapt to new information in a more autonomous way. This makes them an appealing solution for all sorts of tasks. But as AI agents are deployed, organizations should control what actions the agents can take, particularly in the early stages of a project. Thus, teams working with AI agents should have approval paths in place for high-impact actions to ensure agent scope does not extend beyond expected use cases, minimizing risk to the wider system.

    2: Bake in security 

    The introduction of new tools should not expose a system to fresh security risks. 

    Organizations should consider agentic platforms that comply with high security standards and are validated by enterprise-grade certifications such as SOC2, FedRAMP or equivalent. Further, AI agents should not be allowed free rein across an organization’s systems. At a minimum, the permissions and security scope of an AI agent must be aligned with the scope of the owner, and any tools added to the agent should not allow for extended permissions. Limiting AI agent access to a system based on their role will also ensure deployment runs smoothly. Keeping complete logs of every action taken by an AI agent can also help engineers understand what happened in the event of an incident and trace back the problem.
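The scoping rule above (an agent's permissions never exceed its owner's, and tools must not extend them) can be expressed as simple set operations. This is a hedged sketch; the permission names and helper functions are illustrative assumptions, not a real authorization API.

```python
# Hypothetical permission model: permissions are plain strings, and the
# agent's effective scope is the intersection of what it requests with
# what its human owner actually holds.

def effective_scope(owner_scope: set[str], agent_scope: set[str]) -> set[str]:
    """An agent may never hold a permission its owner lacks."""
    return owner_scope & agent_scope


def can_add_tool(agent_effective: set[str], tool_required: set[str]) -> bool:
    """A tool is allowed only if it needs no permissions beyond the agent's scope."""
    return tool_required <= agent_effective


owner = {"read_metrics", "restart_service"}
agent_requested = {"read_metrics", "restart_service", "delete_resource"}

scope = effective_scope(owner, agent_requested)  # delete_resource is dropped
print(scope)
print(can_add_tool(scope, {"read_metrics"}))          # tool fits the scope
print(can_add_tool(scope, {"delete_resource"}))       # tool would extend it
```

Modeling scope as an intersection makes the invariant easy to audit: no composition of owner, agent and tools can ever widen access beyond the owner's own permissions.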

    3: Make outputs explainable 

AI use in an organization must never be a black box. The reasoning behind any action must be recorded so that any engineer reviewing it can understand the context the agent used for decision-making and access the traces that led to those actions.

    Inputs and outputs for every action should be logged and accessible. This will help organizations establish a firm overview of the logic underlying an AI agent’s actions, providing significant value in the event anything goes wrong.
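One way to make this concrete is a structured audit log that records the inputs and outputs of every agent action. A minimal sketch, assuming nothing about any particular platform: the agent name, action name and record shape are all hypothetical.

```python
import json
import time


def log_action(log: list[dict], agent_id: str, action: str,
               inputs: dict, outputs: dict) -> None:
    """Append one structured, replayable record of an agent action."""
    log.append({
        "ts": time.time(),      # when the action ran
        "agent": agent_id,      # which agent acted
        "action": action,       # what it did
        "inputs": inputs,       # the context it decided on
        "outputs": outputs,     # what resulted
    })


audit_log: list[dict] = []
log_action(audit_log, "incident-bot", "scale_up",
           inputs={"service": "api", "replicas": 3},
           outputs={"status": "ok"})

# During an incident review, an engineer can serialize and inspect the trace.
print(json.dumps(audit_log, indent=2, default=str))
```

Because each record pairs inputs with outputs, the log answers both "what did the agent do?" and "what information was it acting on?", which is what makes a rollback decision defensible.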

    Security underscores AI agents’ success

    AI agents offer a huge opportunity for organizations to accelerate and improve their existing processes. However, if they do not prioritize security and strong governance, they could expose themselves to new risks.

    As AI agents become more common, organizations must ensure they have systems in place to measure how they perform and the ability to take action when they create problems.

    Read more from our guest writers. Or, consider submitting a post of your own! See our guidelines here.


    Hiring specialists made sense before AI — now generalists win

    Tony Stoyanov is CTO and co-founder of EliseAI

In the 2010s, tech companies chased staff-level specialists: backend engineers, data scientists, system architects. That model worked when technology evolved slowly. Specialists knew their craft, could deliver quickly and built careers on predictable foundations like cloud infrastructure or the latest JS framework.

    Then AI went mainstream.

    The pace of change has exploded. New technologies appear and mature in less than a year. You can’t hire someone who has been building AI agents for five years, as the technology hasn’t existed for that long. The people thriving today aren’t those with the longest résumés; they’re the ones who learn fast, adapt fast and act without waiting for direction. Nowhere is this transformation more evident than in software engineering, which has likely experienced the most dramatic shift of all, evolving faster than almost any other field of work.

How AI is rewriting the rules

AI has lowered the barrier to doing complex technical work, and it has also raised expectations for what counts as real expertise. McKinsey estimates that by 2030, up to 30% of U.S. work hours could be automated and 12 million workers may need to shift roles entirely. Technical depth still matters, but AI favors people who can figure things out as they go.

    At my company, I see this every day. Engineers who never touched front-end code are now building UIs, while front-end developers are moving into back-end work. The technology keeps getting easier to use but the problems are harder because they span more disciplines.

    In that kind of environment, being great at one thing isn’t enough. What matters is the ability to bridge engineering, product and operations to make good decisions quickly, even with imperfect information.

    Despite all the excitement, only 1% of companies consider themselves truly mature in how they use AI. Many still rely on structures built for a slower era — layers of approval, rigid roles and an overreliance on specialists who can’t move outside their lane.

    The traits of a strong generalist 

    A strong generalist has breadth without losing depth. They go deep in one or two domains but stay fluent across many. As David Epstein puts it in Range, “You have people walking around with all the knowledge of humanity on their phone, but they have no idea how to integrate it. We don’t train people in thinking or reasoning.” True expertise comes from connecting the dots, not just collecting information.

    The best generalists share these traits:

    • Ownership: End-to-end accountability for outcomes, not just tasks.

    • First-principles thinking: Question assumptions, focus on the goal, and rebuild when needed.

    • Adaptability: Learn new domains quickly and move between them smoothly.

    • Agency: Act without waiting for approval and adjust as new information comes in.

    • Soft skills: Communicate clearly, align teams and keep customers’ needs in focus.

    • Range: Solve different kinds of problems and draw lessons across contexts.

I try to make accountability a priority for my teams. Everyone knows what they own, what success looks like and how it connects to the mission. Perfection isn't the goal; forward movement is.

    Embracing the shift

    Focusing on adaptable builders changed everything. These are the people with the range and curiosity to use AI tools to learn quickly and execute confidently.

    If you’re a builder who thrives in ambiguity, this is your time. The AI era rewards curiosity and initiative more than credentials. If you’re hiring, look ahead. The people who’ll move your company forward might not be the ones with the perfect résumé for the job. They’re the ones who can grow into what the company will need as it evolves.

    The future belongs to generalists and to the companies that trust them.
