44 technical elements you probably haven’t thought of
It’s easy to feel excited by AI assistants: we’re creating machines you can talk to, and systems that can retrieve and share information faster and at a far greater scale than our human brains ever could alone. Still, you have to force yourself to step back far enough to see the full stack of technology an AI assistant is built on.
The most advanced AI assistants have natural language processing (NLP), retrieval-augmented generation (RAG) to ground answers in specialist, company-specific knowledge, easy integration with all your favourite apps and systems, a choice of LLM (with full administrative control), and a human in the loop to keep everything in check.
These are what we believe are the five non-negotiable things your AI assistant must have to be successful, but it doesn’t end there. You’ll also need to manage the AI assistant’s content, refresh the data foundations it rests on, and know when to use contrasting tools like RAG and API calls as appropriate.
Beyond those, there are a further 44 things you might not have thought about that you’ll need to run an AI assistant successfully:
Scalability and reliability
1. Elasticity to scale your resources up or down as needed
2. Load balancing to distribute workloads evenly across resources
3. Stateless architecture so servers don’t have to store user session data
4. Fault tolerance to keep operating despite any failures
5. Mean Time Between Failures (MTBF) tracking to measure and improve reliability
6. Effective error handling for any unexpected challenges (see the sketch after this list)
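To make fault tolerance (4) and error handling (6) a little more concrete, here’s a minimal sketch of a retry wrapper with exponential backoff around a flaky external call. The callable and the retry settings are our own assumptions; a production version would also need logging, alerting, and probably a circuit breaker.

```python
import random
import time


def with_retries(call, max_attempts=3, base_delay=0.5):
    """Retry a flaky external call with exponential backoff and jitter.

    `call` is any zero-argument function that may raise, e.g. a request to
    an LLM provider or a knowledge-base API.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:  # in practice, catch only errors known to be retryable
            if attempt == max_attempts:
                raise  # surface the failure once retries are exhausted
            # Backoff with jitter spreads retries out so a brief outage doesn't
            # turn into a thundering herd of simultaneous repeat calls.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))


# Hypothetical usage:
# answer = with_retries(lambda: llm_client.complete(prompt))
```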
Development and maintenance
7. Change control and rollback facility for smooth, effective changes to systems
8. Timely updates for LLM technology and other external technologies
9. Code management with version control systems and branching strategies
10. Continuous code reviews and bug fixes
11. Modular design to simplify updates and maintenance
12. Bespoke dialog engine with entity detection, digressions, and user feedback (a minimal entity-detection sketch follows this list)
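As a rough illustration of item 12, the sketch below shows the smallest possible slice of a dialog engine: a regex-based entity detector that pulls an order reference out of a user message so the next dialog step can act on it (or handle the digression and return to the main flow). The `OrderEntity` name and the pattern are illustrative assumptions, not a prescription for how your engine should be structured.

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class OrderEntity:
    """A hypothetical entity type: an order reference like 'ORD-12345'."""
    order_id: str


ORDER_PATTERN = re.compile(r"\bORD-(\d{4,8})\b", re.IGNORECASE)


def detect_order(message: str) -> Optional[OrderEntity]:
    """Return the first order reference found in a user message, if any."""
    match = ORDER_PATTERN.search(message)
    return OrderEntity(order_id=match.group(0).upper()) if match else None


# The dialog engine can branch on whether the entity was found, handling the
# digression ("where's my order?") before returning to the main flow.
print(detect_order("Hi, any update on ord-10442 please?"))  # OrderEntity(order_id='ORD-10442')
```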
User experience
13. A messenger window that can handle every type of response, from text and button links to multiple choice and follow-on questions (see the response-payload sketch after this list)
14. A preview mode and draft status to check the quality of your responses
15. Live chat options
16. Multilingual response capability
17. Consistent messaging for LLM responses (without active management, they’ll vary unpredictably from one conversation to the next)
18. The option to manage multiple AI assistants for both external and internal users
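Item 13 implies a response format that can carry more than plain text. Below is a minimal sketch of one such payload; the field names are illustrative assumptions rather than a standard your messenger window must follow.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Button:
    label: str
    url: str


@dataclass
class AssistantResponse:
    """One message rendered in the chat window.

    A single schema covers plain text, button links, and multiple-choice
    follow-on questions, so the front end only has to understand one shape.
    """
    text: str
    buttons: List[Button] = field(default_factory=list)
    choices: List[str] = field(default_factory=list)   # rendered as quick replies
    follow_up: Optional[str] = None                    # next question to ask, if any


reply = AssistantResponse(
    text="Your order has shipped.",
    buttons=[Button(label="Track parcel", url="https://example.com/track")],
    follow_up="Is there anything else I can help with?",
)
```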
API integration
19. Understand API protocols (REST, SOAP) and data contracts
20. Implement OAuth, API keys, or other security measures
21. Define strategies for API failures and exception management
22. Consider API rate limits and plan for throttling or quota restrictions (see the sketch after this list)
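For items 20 to 22, here’s a hedged sketch of calling a REST endpoint with key-based authentication, turning failures into exceptions, and backing off when the provider signals a rate limit. The URL, header scheme, and response shape are placeholders; your provider’s documentation defines the real contract.

```python
import time

import requests


def call_api(url: str, api_key: str, max_attempts: int = 3) -> dict:
    """Call a REST endpoint with key-based auth, respecting 429 rate limits."""
    headers = {"Authorization": f"Bearer {api_key}"}  # header scheme varies by provider
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code == 429:
            # The provider is throttling us; wait for the advertised interval
            # (or a short default) before trying again.
            wait = float(response.headers.get("Retry-After", "1"))
            time.sleep(wait)
            continue
        response.raise_for_status()  # turn 4xx/5xx into exceptions handled upstream
        return response.json()
    raise RuntimeError("Rate limit still in effect after retries")


# Hypothetical usage:
# data = call_api("https://api.example.com/v1/orders/123", api_key="...")
```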
Security
23. Data encryption and redaction to protect personally identifiable information (PII); a redaction sketch follows this list
24. Two-factor authentication (2FA) for secure access to the AI platform
25. Granular role-based access control (chosen by an admin)
26. 256-bit AES encryption at rest and TLS 1.2 for encrypting data in transit
27. Principle of least privilege adopted throughout your platform
28. SOC 2 compliance
29. Automatic DDoS protection
30. Secure coding practices, including protections against the OWASP Top 10
31. End-to-end and continuous testing of all your technology systems
32. Protection against common threats
33. Protection against jailbreaking prompts (attempts to trick the assistant into bypassing its safeguards)
34. Penetration testing
35. Compliance audits
36. Authentication
37. Authorisation for different levels of platform access with an audit trail of users and their actions
38. Regular audits of your systems by independent third parties
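Much of the security list comes down to process and platform controls, but item 23 (and item 39 later on) can be illustrated: the sketch below masks email addresses and phone-like numbers before a transcript is stored. Real PII detection is considerably harder than two regular expressions, so treat these patterns purely as placeholders.

```python
import re

# Deliberately simple patterns; production redaction usually combines
# pattern matching with a trained PII-detection model.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact(text: str) -> str:
    """Mask obvious PII before a chat transcript is logged or exported."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


print(redact("Contact me on jane.doe@example.com or +44 7700 900123"))
# Contact me on [EMAIL] or [PHONE]
```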
Data and performance
39. The option to review chat logs (with redacted personal information) to track AI assistant performance and evolve the instructions it’s given
40. Comprehensive reporting on all AI assistant activity to consistently improve performance (see the reporting sketch after this list)
41. The ability to improve specific automations and export the related data
42. The ability to collect and store only the data you need, and to get consent to use it
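As a small illustration of items 39 and 40, this sketch aggregates already-redacted chat-log records into a simple outcome report so you can see where the assistant resolves queries and where it escalates. The record fields are assumptions; real logs will have a richer schema.

```python
from collections import Counter

# Hypothetical, already-redacted chat-log records.
chat_logs = [
    {"topic": "orders", "outcome": "resolved"},
    {"topic": "orders", "outcome": "escalated_to_human"},
    {"topic": "returns", "outcome": "resolved"},
]


def outcome_report(logs):
    """Count outcomes per topic so you can see where the assistant struggles."""
    report = Counter((log["topic"], log["outcome"]) for log in logs)
    return {f"{topic}/{outcome}": count for (topic, outcome), count in report.items()}


print(outcome_report(chat_logs))
# {'orders/resolved': 1, 'orders/escalated_to_human': 1, 'returns/resolved': 1}
```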
Other
43. Browser compatibility
44. Secure environment for building, testing and deploying AI models successfully and with full privacy
Besides this non-exhaustive (and ever-growing) list of necessary elements, you’ll also need to prepare for events that happen outside your control:
- If your most experienced AI enthusiast leaves the company, what happens then? Who do you turn to for expert guidance, and are you prepared for this?
- Or if you rely on a particular AI technology and a change in data law makes it unusable, what then? Are your practices AI-agnostic, so you can quickly switch to another provider?
All these technological, ethical, and economic factors need urgent focus before you even think about starting to build an AI assistant. That’s why it’s easier to bring in a team that’s already set up to handle it all and ready to run in production.