Q&A with Steve Tcherchian, CISO and Chief Product Officer of XYPRO Technology

Steve Tcherchian is CISO and Chief Product Officer at XYPRO, a leading cybersecurity solutions company. He is on the ISSA CISO Advisory Board, the NonStop Under 40 executive board and is part of the ANSI X9 Security Standards Committee.

He is a regular contributor to and presenter at the EC-Council. With more than 20 years in the cybersecurity field, Steve is responsible for strategy, innovation, and development of XYPRO's security product line, and he oversees XYPRO's risk, compliance, and security programs to ensure the best experience for customers in the mission-critical computing marketplace (www.xypro.com). Steve also works closely with XYPRO's HR department to keep it cybersafe, since HR is crucial to the company's day-to-day operation.

The company has 35 years of knowledge, experience and success providing HPE NonStop information systems security, risk management, and compliance to customers.

Why are DeepSeek threats so prominent?

Because they're not just technical; they're geopolitical. DeepSeek is more than an AI model: it's part of a broader movement toward digital sovereignty, and it represents a shift in who controls AI intelligence. When a model is developed and governed by a nation like China that plays by a completely different set of rules, the risk isn't just about model performance; it's about trust, transparency, and intent. That's what makes it so prominent. It forces us to rethink who gets to shape the future of AI.

What makes them different from other cyberthreats?

Typical cyberthreats exploit known vulnerabilities. DeepSeek introduces unknowns. You're not just dealing with a malicious payload or a phishing campaign; you're embedding a foreign-trained intelligence layer into your systems. Think about that for a minute. The threat isn't just in the code; it's in the decisions the code makes. That's a very different kind of risk: subtle, embedded, and harder to detect. It blurs the lines between infrastructure, intelligence, and threat.

Should AI continue to be relied upon so heavily given the DeepSeek threat?

Yes, but with guardrails. AI adoption is not slowing down; that ship has sailed. But we can be smarter about how we evaluate the models we adopt. We need transparency in training data, governance structures, and how much autonomy we give these systems. Blind trust in AI is the problem, not AI itself. People use it without understanding the power and intention behind it. The key is balancing innovation with accountability.

Are DeepSeek threats a bad omen for future cybersecurity technology and advancements?

I wouldn’t call it a bad omen—I’d call it a wake-up call. DeepSeek is showing us where the blind spots are. It’s forcing the cybersecurity industry to evolve and address risks we didn’t have five years ago—like AI-origin risk, observability, and behavioral manipulation. It’s a catalyst. Like every other innovative technology, we either adapt or we get left behind.

What are some solutions to the DeepSeek threat?

Start with observability. You can’t secure what you can’t see. We need AI systems that are auditable—where decisions can be traced, explained, and challenged.

Second, treat AI origin the same way we treat supply chain security. Vet the model. Know where it came from. Understand how it was trained. And finally, don’t give AI more autonomy than it earns. It’s a tool—not a decision-maker.
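The auditability and origin-tracking described above could be sketched as a thin logging wrapper around any model call. This is an illustrative sketch, not XYPRO's product or a real DeepSeek API: the `AuditedModel` class, its `model_origin` provenance record, and the callable `model_fn` interface are all assumptions made for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditedModel:
    """Hypothetical wrapper that records every model decision so it can be
    traced, explained, and challenged later. `model_fn` is assumed to be any
    callable that takes a prompt string and returns a response string."""

    def __init__(self, model_fn, model_origin):
        self.model_fn = model_fn
        # Provenance record: who built the model, where it was trained, etc.
        self.model_origin = model_origin
        self.audit_log = []

    def query(self, prompt):
        response = self.model_fn(prompt)
        # Record enough context to reconstruct and question the decision.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "origin": self.model_origin,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "response": response,
        })
        return response

    def export_log(self):
        # Serialize the log for external review or compliance archiving.
        return json.dumps(self.audit_log, indent=2)
```

A wrapper like this keeps the AI in the "tool" role the answer argues for: every output is attributable to a known origin and leaves a reviewable trail, rather than being trusted blindly.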

Unfortunately, this is easier said than done. Someone is always going to push the envelope, leaving these concerns by the wayside.

Will we be as concerned about these threats in five years?

Yes, and we'll be concerned about even more advanced threats. AI isn't going away. In five years, these models will be deeper, faster, and embedded everywhere in our daily lives. What will change is how we manage the risk. Hopefully by then we'll have matured in how we evaluate, audit, and govern AI, not just build it faster. If we don't, the concerns we have today will seem mild by comparison.