What is the difference between AI and automation?
Automation uses rules and triggers to complete repetitive tasks without manual input. AI can make more complex decisions by analysing data, recognising patterns, and adapting to new information. We often combine the two for the best results.
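As a rough illustration of the difference, here is a minimal Python sketch. The scenario, the function names, and the `classifier` object are hypothetical placeholders, not a real system we deploy:

```python
# Automation: a fixed rule that fires the same action every time
# a known phrase appears. It never adapts.
def automation_rule(email_subject: str) -> str:
    if "donation receipt" in email_subject.lower():
        return "send_receipt_template"
    return "route_to_inbox"

# AI: a trained model (hypothetical `classifier`) infers the intent
# behind wording it has never seen before, then an action is chosen.
def ai_decision(email_body: str, classifier) -> str:
    intent = classifier.predict(email_body)  # e.g. "receipt_request"
    actions = {"receipt_request": "send_receipt_template"}
    return actions.get(intent, "route_to_human")
```

In practice the two work together: rules handle the predictable cases cheaply and reliably, while the model handles the wording the rules cannot anticipate.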
How can nonprofits use AI responsibly and ethically with vulnerable audiences?
We recommend being transparent about where and how AI is used, keeping a human-led approach for sensitive decisions, and putting in place strict privacy and safeguarding policies.
How do we protect our community's data?
We design AI and automation systems to comply with GDPR and relevant data protection laws, and ensure data is stored and processed securely, collecting as little personally identifiable information as possible.
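For example, minimal collection can be enforced in code before a message is ever stored. A simple sketch, assuming basic pattern-based redaction (the patterns here are illustrative and would need extending for real use):

```python
import re

# Strip common identifiers from a message before it is logged or
# passed on for AI processing, so only minimal personal data is kept.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\+?\d[\d\s-]{7,}\d\b")

def minimise(message: str) -> str:
    message = EMAIL.sub("[email removed]", message)
    message = PHONE.sub("[phone removed]", message)
    return message

print(minimise("Call me on 07700 900123 or email jo@example.org"))
# -> "Call me on [phone removed] or email [email removed]"
```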
Can chatbots support vulnerable or non-native English speakers?
Yes - with careful design. This may include multilingual support, plain-language content, clear escalation to a human, and training the AI on culturally appropriate responses.
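To show what clear escalation to a human can look like in practice, here is a simplified sketch. The trigger terms and confidence threshold are illustrative assumptions; a real safeguarding word list would be developed with your team, and word matching alone is deliberately crude:

```python
# Hand the conversation to a human if a message contains safeguarding
# trigger words, or if the bot is not confident in its answer.
ESCALATION_TERMS = {"suicide", "self-harm", "abuse", "emergency"}

def should_escalate(message: str, bot_confidence: float) -> bool:
    words = set(message.lower().split())
    return bool(words & ESCALATION_TERMS) or bot_confidence < 0.6

if should_escalate("I need help, this is an emergency", 0.9):
    print("Connecting you with a member of our team...")
```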
What is a good place to start using AI?
Most organisations see quick wins by automating repetitive admin tasks or piloting an AI assistant to answer frequently asked questions. This allows teams to test the technology in a low-risk way before expanding to more complex uses. That said, we always recommend starting with the free workshop we provide: it helps you understand where your team is, map out opportunities, and discuss risks and concerns.
How are nonprofits using AI chatbots to reach more people and improve service delivery?
Charities use AI chatbots to provide instant answers to common questions, guide people through forms or applications, offer basic triage for services, and direct users to the right resources. This can extend service hours, reduce staff workload, and improve access for people who may not feel comfortable calling or emailing.
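As a simplified illustration of how basic triage and signposting can work (the topics and URLs below are placeholders, not real services):

```python
# Match a query against a small set of service areas and point the
# user to the right resource; otherwise fall back to a human.
RESOURCES = {
    "housing": "https://example.org/housing-support",
    "benefits": "https://example.org/benefits-advice",
    "food": "https://example.org/food-bank",
}

def triage(query: str) -> str:
    q = query.lower()
    for topic, url in RESOURCES.items():
        if topic in q:
            return f"You may find our {topic} page helpful: {url}"
    return "I'll connect you with a member of staff who can help."

print(triage("I'm struggling to pay for food this week"))
```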
What are the privacy and ethical concerns about using chatbots with sensitive audiences?
The main concerns are ensuring personal information is stored and processed securely, preventing misuse of data, and avoiding responses that could cause harm. Chatbots must make clear when the user is speaking to AI and offer straightforward ways to connect with a human.
What are the potential ethical risks?
Risks include bias in AI responses, over-reliance on automated decisions, lack of accountability, and the erosion of trust if users do not know they are interacting with AI. We address these by combining AI with human oversight, testing for bias, and setting clear boundaries for what the AI should and should not do.
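To make "testing for bias" concrete, here is a minimal sketch of one approach: ask the same question phrased for different groups and flag divergent answers for human review. `ask_bot` is a stand-in for whatever chatbot interface you use, and exact string comparison is a deliberate simplification; real checks would compare meaning, not wording:

```python
# Send the same request phrased for different groups and flag any
# answer that diverges from the first group's answer.
def bias_check(ask_bot, template: str, groups: list[str]) -> dict:
    answers = {g: ask_bot(template.format(group=g)) for g in groups}
    baseline = next(iter(answers.values()))
    return {g: a for g, a in answers.items() if a != baseline}

flagged = bias_check(
    lambda q: "apply online",  # stub bot for demonstration
    "How does a {group} person apply for support?",
    ["young", "elderly", "disabled", "non-English-speaking"],
)
print(flagged or "No divergent answers found")
```

Anything flagged goes to a person to judge whether the difference is appropriate or a sign of bias.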
How do we maintain a human connection?
We design AI tools to complement, not replace, human interaction. This means keeping humans in the loop for sensitive or complex cases, offering easy escalation options, and making sure people feel heard and supported.
How can we be transparent about our use of AI?
We recommend clear labelling so users know when they are interacting with AI, publishing an AI use policy on your website, and explaining how data will be used. This helps maintain trust and meets ethical best practice. It's also important to explain why you are using AI, not just that you are.