The Federal Government Has an AI Plan. Here's Why It Matters for Our Community.

The Government of Canada just released its first-ever AI Strategy for the Federal Public Service. It's a two-year plan for how the federal government will start using artificial intelligence across its departments and programs.

You might be wondering: what does a government tech strategy have to do with us?

More than you'd think. Here's what we're paying attention to.

Our Communities Are Centered in This Strategy

The government isn't just thinking about making its own offices run faster. It's explicitly committed to making sure AI doesn't leave people behind, especially those who already face barriers accessing services.

That includes the people our organizations work with every day.

The strategy calls for ongoing engagement with equity-deserving groups, marginalized communities, and people with disabilities in the design of AI-powered services. The grassroots organizations, charities, and nonprofits in our network are closest to these communities. We have knowledge and relationships that government departments need if they're serious about getting this right.

The Government Will Be Looking for Community Partners

As departments start building and testing AI tools, they're required to meaningfully engage the public, especially the communities most affected by these systems.

They're going to need trusted intermediaries to help them do that.

As a network organization, we already play that bridging role. We connect institutions to community voices. We can bring member perspectives into government consultations, help shape how AI tools are tested, and make sure the people most impacted actually have a say. That's a role worth stepping into intentionally.

Funding Expectations Are About to Change

The strategy directs every federal department to start identifying programs and services that could be transformed by AI and to include AI planning in budget proposals going forward.

That change will ripple out to the community sector. Grant-makers and program officers will increasingly expect to see some awareness of AI in the proposals they review.

We want to make sure our members aren't caught off guard by that shift. Part of our job is to help organizations in our network understand what's coming and feel prepared to engage with it on their own terms.

We Need to Watch Out for Our Communities

The strategy itself acknowledges that AI should not be used for decisions about social service eligibility, hiring, or criminal justice because the risk of harm and bias is too high.

Many of the people our members serve are directly affected by exactly those systems.

We have a responsibility to pay close attention to how government AI gets deployed in practice, to amplify community concerns when they arise, and to advocate clearly when things aren't going the way they should. That's part of what it means to be a network that the social impact community can trust.

The Bottom Line

This strategy creates real opportunities for advocacy, programming, partnerships, and funding. But those opportunities will go to the organizations that show up early and speak credibly on behalf of the community sector.

That's what we're here for.

We'll be keeping a close eye on how this strategy unfolds and sharing updates with our network. If you have thoughts, concerns, or questions about what AI means for your organization, we'd love to hear from you.

Let's figure this out together.
