Course: Business Professionalism
Before starting the Business Professionalism course, I approached stakeholder engagement instinctively - I knew who mattered in my organisations, but I had never formalised the process. Learning about stakeholder mapping frameworks, particularly the power-interest matrix, was a turning point in how I think about security leadership.
The specific skill I acquired was the ability to systematically categorise stakeholders based on their influence and interest, and then tailor communication strategies accordingly. In cybersecurity, this is critical: a board member needs to hear about risk in financial terms, while an engineering lead needs technical specifics. I had always understood this intuitively, but the framework gave me a structured method to plan these interactions rather than improvising each time.
This learning proved immediately useful in my role at Boosta. When I joined as CISO, I was responsible for implementing security across 31 business units - each with its own leadership, priorities, and level of security awareness. Rather than applying a uniform communication approach, I mapped stakeholders across the holding using the power-interest matrix. Unit heads with high operational risk exposure received detailed, data-driven briefings. Others who were less directly affected but held budget authority received concise, impact-focused summaries. This differentiated approach significantly reduced resistance to new security policies.
I have also applied this learning when preparing security investment proposals. Previously, I would present technical justifications - threat statistics, vulnerability counts, compliance gaps. Now I frame proposals around business outcomes: cost of potential breach, regulatory penalties avoided, operational continuity preserved. The stakeholder mapping exercise taught me that the strength of an argument depends entirely on who is listening.
Going forward, I plan to integrate stakeholder analysis into every major security initiative I lead, treating it as a mandatory planning step rather than an afterthought.
Course: Research Methods
As a cybersecurity professional, I have spent years making decisions based on experience, threat intelligence, and professional judgement. The Research Methods course challenged me to examine something I had taken for granted: the difference between knowing something from practice and being able to demonstrate it through rigorous evidence.
The most significant skill I acquired was understanding how to construct a defensible research design - selecting an appropriate methodology, aligning research questions with philosophical positions, and ensuring that findings are credible within a chosen framework. Specifically, learning about interpretivist epistemology and qualitative methods opened a new perspective for me. In cybersecurity, we default to quantitative thinking: metrics, dashboards, incident counts. But many of the most important questions in my field - why security awareness programmes fail, how CISOs actually make risk decisions - require qualitative exploration.
This learning directly shaped my research proposal on Security Awareness Training effectiveness. I designed the study as a qualitative, interpretivist investigation using semi-structured interviews with cybersecurity professionals. Six months ago, I would have instinctively reached for a survey with numerical scales. The Research Methods course taught me that the richness of professional experience cannot be captured in Likert items - it requires the depth that thematic analysis of interview data provides.
Beyond academia, this skill has changed how I approach problem-solving at work. When evaluating why a security policy is not being followed across Boosta's business units, I no longer assume the answer is deliberate non-compliance. Instead, I investigate: I ask questions, look for patterns, and consider contextual factors. This is, in essence, applied qualitative research - and it produces far better remediation strategies than simply tightening controls.
I intend to carry this evidence-based mindset into future security strategy work, ensuring that decisions are grounded in structured analysis rather than assumption alone.
Course: Business Professionalism
The CrowdStrike case study we examined in Business Professionalism was one of the most directly relevant exercises I have encountered in my academic career. Analysing how a cybersecurity company managed a major incident - and how its communication strategy affected corporate reputation - gave me a framework I could immediately apply to my own work.
The specific learning was understanding Situational Crisis Communication Theory (SCCT) and how the choice of response strategy (deny, diminish, rebuild) must align with the level of organisational responsibility and the severity of reputational threat. I also learned how stakeholder perception during a crisis is shaped not by technical facts alone, but by the timing, tone, and transparency of communication.
This was useful because, as a CISO, incident response is a core part of my role - but I had always focused on the technical dimension: containment, eradication, recovery. The Business Professionalism course forced me to consider the communication dimension with equal seriousness. A perfectly executed technical response means nothing if stakeholders lose confidence because they were informed too late, too vaguely, or in the wrong tone.
I have already applied this learning at Boosta. When we identified a significant access control gap during an internal audit, I drafted the communication to leadership using SCCT principles: I acknowledged the issue transparently, explained the root cause without deflecting, and presented a concrete remediation timeline. The response from management was constructive rather than adversarial - a direct result of framing the message correctly.
Moving forward, I plan to incorporate crisis communication planning into every incident response plan I develop. Technical playbooks will be paired with communication playbooks, ensuring that my teams are prepared to manage both the threat and the narrative simultaneously.
Course: IT Service Management
I entered the IT Service Management course with a bias: I viewed ITIL as a bureaucratic framework designed primarily for enterprise IT helpdesks, with limited relevance to cybersecurity. The course corrected this assumption thoroughly, and the shift in my understanding has had practical consequences for how I design security operations.
The key learning was understanding the ITIL 4 service value system - particularly the concept that every IT function, including security, exists to co-create value with the business. This reframing was significant. I had always positioned cybersecurity as a protective function: we prevent bad things from happening. ITIL taught me to articulate security as a value-enabling function: we create the conditions under which the business can operate confidently and grow.
This distinction proved immediately applicable. At Boosta, I had been struggling to justify the time and cost of implementing centralised identity management across 31 business units. When I reframed the proposal using service value language - describing IAM deployment not as "access control enforcement" but as "enabling secure, seamless collaboration across the holding" - the conversation with leadership shifted entirely. The project was approved with minimal friction.
I also applied ITIL's continual improvement model to my SOC processes. Rather than treating playbooks as static documents, I introduced a quarterly review cycle where detection rules, escalation paths, and response procedures are evaluated against recent incident data. This is essentially the ITIL feedback loop applied to security operations, and it has measurably improved our detection accuracy.
The ITSM course taught me that frameworks are not constraints - they are communication tools. Speaking the language of service management allows security leaders to be heard in conversations where purely technical arguments fail.
Course: Information Security Management
The Information Security Management course introduced a level of strategic thinking about security that I had not formally engaged with before, despite years of hands-on experience. The most valuable learning was understanding the relationship between security governance, risk management, and organisational culture - and recognising that technical controls are only effective when embedded within a coherent governance structure.
Specifically, working on the security analysis report for the New Zealand financial services sector pushed me to think beyond individual vulnerabilities and consider systemic risk. I had to assess how regulatory requirements (such as the NZISM and the RBNZ's guidance on cyber resilience), organisational risk appetite, and industry-specific threat landscapes interact to shape an effective security posture. This systems-level analysis was a new discipline for me - in my professional career, I had always worked from the technical layer upward, not from the governance layer downward.
This learning has been directly useful in my current role. At Boosta, the absence of any formal security governance meant I had to build not just technical controls, but an entire governance framework: policies, audit procedures, risk registers, and compliance mapping. The ISM course gave me the conceptual foundation to do this systematically rather than ad hoc. I structured Boosta's framework around the NIST CSF, but the decision to choose that framework - and how to adapt it for a decentralised holding structure - was informed by the governance thinking I developed in this course.
Going forward, I plan to use this governance-first approach in every new security engagement. Technical deployment without governance context is reactive and fragile. The ISM course convinced me that durable security begins with policy, risk appetite, and stakeholder alignment - the technology comes after.
During a group presentation exercise in the Information Security Management course, I received feedback from three peers that highlighted both strengths and areas for improvement in my communication style. The exercise required us to evaluate each other's presentation delivery, teamwork contributions, and professional presence.
The feedback I received was largely consistent across all three evaluations. My peers noted that I bring strong subject-matter authority to discussions - when I speak about cybersecurity topics, my experience is immediately evident, and this lends credibility to my arguments. They also commented that I am well-organised and prepared, rarely speaking without a clear point to make.
However, two of the three evaluators raised a similar concern: I tend to assume that my audience shares my technical baseline. One peer noted that during our group work, I sometimes used cybersecurity terminology without pausing to check whether others understood. Another observed that my presentation slides were content-dense, which made them effective as reference material but challenging to follow during a live delivery.
This feedback was valuable because it confirmed something I had suspected but not confronted directly: my communication defaults are calibrated for security professionals, not for general business audiences. In my current role at Boosta, this has real consequences - I present to stakeholders whose expertise is in marketing, product development, and finance, not in security operations.
Since receiving this feedback, I have made two concrete changes. First, I now structure presentations with a "no jargon" first pass - I draft slides for clarity, then review them as if the audience has no security background. Second, I have started using analogies and business-impact framing instead of technical descriptions when presenting risk scenarios to non-technical stakeholders. These adjustments are ongoing, but the peer feedback was the catalyst that made them conscious and deliberate.