Israeli cybersecurity firm Bonfy.AI raises $9.5m seed funding
Bonfy.AI, an Israeli cybersecurity startup, has raised US$9.5 million in seed funding.
The round was led by TLV Partners and supported by Saban Capital Group.
Based in Tel Aviv, Bonfy.AI aims to mitigate risks related to generative AI content, such as intellectual property exposure, privacy breaches, and regulatory violations.
The company was founded in early 2024 by Gidi Cohen and Danny Kibel, both experienced in cybersecurity.
The new funding will be allocated to enhance the startup’s capabilities amid growing concerns about data governance and content security.
🔗 Source: Calcalist
Bonfy.AI’s emergence reflects a broader industry shift where traditional cybersecurity approaches are being reimagined specifically for AI-generated content.
The cybersecurity startup landscape shows a clear trend toward AI-focused security solutions, with companies like Lakera AI and Prompt Security also developing specialized tools for AI governance and protection 1.
This evolution is necessary as AI capabilities expand. Current security infrastructure was designed for human-generated content and structured data flows, not the unique risks of generative AI technologies.
The rise of these specialized solutions indicates that enterprises are recognizing AI content security as a distinct challenge requiring purpose-built tools rather than adaptations of existing security frameworks.
Bonfy’s targeting of healthcare, finance, and legal sectors aligns with regulatory trends that place additional burdens on these industries when adopting AI.
Healthcare organizations implementing AI face particularly complex requirements, with the EU AI Act classifying most healthcare AI tools as “high-risk” and imposing compliance deadlines throughout 2025 2.
Financial institutions, already among the most targeted sectors for cyberattacks, must now contend with both existing financial regulations and emerging AI governance frameworks 3.
The integration of AI in these regulated environments creates multi-dimensional compliance challenges where content security becomes a critical component of regulatory readiness.
This regulatory complexity helps explain why Bonfy is prioritizing these sectors—they face not only technical security challenges but also significant legal and reputational risks from AI content mismanagement.
While much focus remains on external threats, Bonfy’s approach to content security acknowledges the evolving nature of insider risks in the AI era.
Research indicates that a significant portion of successful cyberattacks involve insider threats, with authorized users potentially exposing sensitive information through AI tools 4.
The expansion of AI tools within organizations creates new vectors for accidental data exposure, as employees may inadvertently input sensitive information into generative AI systems without proper guardrails.
This challenge is particularly acute in environments like healthcare, where patient data requires strict protection, yet clinical staff increasingly rely on AI tools to improve efficiency 5.
Security solutions must now consider not just malicious data exfiltration but also the unintended consequences of legitimate AI use—a shift that explains Bonfy’s focus on behavioral context when assessing content risk.
Read full article on Tech in Asia