Leesburg, Va.-based PrimeAI has closed a $750,000 seed round as it bids to use artificial intelligence responsibly while delivering a direct impact on, among other things, EBITDA.
CEO Aaron Burciaga, Chief Analytics Officer Jon Higbie and President Lance Kallman co-founded PrimeAI in 2019. Burciaga is a data scientist and AI engineer with experience at Amazon Web Services, Booz Allen and several Fortune 500 companies. Higbie has a background in AI and machine learning, while Kallman has extensive marketing experience.
The five-year-old startup’s flagship product, called Interlace, promises to solve operational challenges and accelerate results across sales, marketing, operations, supply chain, and customer teams with AI engines and private chat.
The company calls its AI engines a “collection of software tools that integrate AI and machine learning models to optimize workflows, automate tasks, overcome operational hurdles and achieve specific business objectives more efficiently.” Private chat is described as an advanced system that uses AI to understand and process natural language queries, enhancing search capabilities across data sources within an organization.
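As described, the private chat capability amounts to natural-language search over an organization’s own data. The sketch below is a generic illustration of that pattern, not PrimeAI’s product or code: it ranks a few invented internal documents against a plain-English query using off-the-shelf TF-IDF similarity from scikit-learn, and every document, name and function in it is hypothetical.

```python
# Hypothetical sketch of natural-language search over internal data sources.
# This is an illustration of the general technique, not PrimeAI's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for an organization's internal data sources (all invented).
internal_docs = [
    "Q3 supply chain report: carrier delays cut on-time delivery to 87%.",
    "Marketing recap: paid search drove 40% of new leads in September.",
    "Sales pipeline review: enterprise deals slipped two weeks on average.",
]

def search(query, docs, top_k=2):
    """Rank internal documents by TF-IDF cosine similarity to a natural-language query."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(docs + [query])  # last row is the query
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sorted(zip(scores, docs), reverse=True)[:top_k]

for score, doc in search("on-time delivery performance this quarter", internal_docs):
    print(f"{score:.2f}  {doc}")
```

A production system of this kind would typically swap the TF-IDF step for semantic embeddings and layer a chat interface and access controls on top, but the retrieval-and-rank core shown here is the same basic idea.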
PrimeAI also focuses on the responsible and ethical use of AI, an area it has detailed in a series of blog posts.
“Responsible AI is composed of autonomous processes and systems that explicitly design, develop, deploy and manage cognitive methods with standards and protocols for ethics, efficacy and trustworthiness,” Burciaga wrote. “Responsible AI can’t be an afterthought or a pretense.”
Burciaga also listed six essential elements of responsible AI, along with brief descriptions:
- Accountable: Algorithms, attributes and correlations are open to inspection.
- Impartial: Internal and external checks enable equitable application across all participants.
- Resilient: Monitored and reinforced learning protocols with humans produce consistent and reliable outputs.
- Transparent: Users have a direct line of sight to how data, output and decisions are used and rendered.
- Secure: AI is protected from potential risks (including cyber risks) that may cause physical and digital harm.
- Governed: Organization and policies clearly determine who is responsible for data, output and decisions.