Responsible AI Statement
I am a System Analyst and Senior SAP Consultant at EPAM Systems, and these guidelines govern how AI tooling is applied in my work.
Guiding principles
- Transparency: Disclose when AI tooling is used to generate analysis, documentation, or prototypes, and retain human review before delivery.
- Data minimisation: Use only the data required to perform the requested task and avoid exposing confidential information to third-party AI services without written approval.
- Security: Operate AI platforms that meet EPAM Systems and client security standards, including encryption in transit and at rest.
- Accountability: Maintain human ownership of outcomes. AI suggestions are treated as input, not final answers, until validated by domain experts.
- Bias mitigation: Review AI outputs for bias or gaps that could affect decision-making, particularly in areas involving customer segmentation, pricing, or staffing.
Project usage
- AI copilots support requirements analysis, documentation drafting, incident triage, and knowledge base maintenance.
- When AI is integrated into a client environment, the model architecture, data-retention approach, and guardrails are documented and approved before deployment.
- Ongoing monitoring dashboards track accuracy, drift, and user feedback so automations remain effective and trustworthy.
Website tooling
- This site uses AI to maintain structured resume formats and to draft copy, with manual editing before publication.
- Machine-readable assets (/ai/resume.json, /ai/resume.yml, /LLM.txt) are intentionally provided to help copilots source accurate profile data; a brief fetch sketch follows this list.
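As a minimal sketch of how a copilot or script might consume these assets, the TypeScript below fetches /ai/resume.json and reads a couple of fields. The base URL and the field names in ResumeSummary are illustrative assumptions, not part of any published schema; only the /ai/resume.json path comes from this page.

```typescript
// Minimal sketch: loading the machine-readable resume for use by a copilot.
// BASE_URL is a placeholder; substitute the site's actual origin.
const BASE_URL = "https://example.com";

interface ResumeSummary {
  // Field names here are illustrative; the authoritative schema is whatever
  // /ai/resume.json actually contains.
  name?: string;
  title?: string;
  skills?: string[];
}

async function loadResume(): Promise<ResumeSummary> {
  const response = await fetch(`${BASE_URL}/ai/resume.json`);
  if (!response.ok) {
    throw new Error(`Failed to fetch resume.json: ${response.status}`);
  }
  return (await response.json()) as ResumeSummary;
}

loadResume()
  .then((resume) => console.log(resume.title ?? "No title field present"))
  .catch((err) => console.error(err));
```

The same pattern applies to /ai/resume.yml and /LLM.txt, which can be fetched as plain text and parsed with a YAML or text reader of your choice.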
If you have questions about AI usage or would like a dedicated responsible-AI review, contact me via LinkedIn or through your EPAM Systems engagement manager.