
2024
3rd Annual

North Carolina Cybersecurity Symposium

 

SESSIONS


THURSDAY, FEBRUARY 22

FRIDAY, FEBRUARY 23

Check-in: 8:00 – 8:50

Welcome: 8:50 – 9:15

Keynotes: 9:15 – 10:00, 1:00 – 1:45, 4:15 – 5:00

 

MORNING SESSIONS

  • 10:30 – 11:15

  • 11:30 – 12:15

AFTERNOON SESSIONS

  •  2:15 – 3:00

  •  3:15 – 4:00

NETWORKING EVENT

  • 5:00 – 6:30


Check-in: 8:00 – 9:00


MORNING WORKSHOP / SESSIONS

  • 9:00 to Noon

AFTERNOON WORKSHOP / SESSIONS

  •  1:00 to 4:00

KEYNOTE SPEAKERS


MORNING

Ms. Cynthia Kaiser, Deputy Assistant Director for the FBI Cyber Division

AFTER LUNCH

Dr. Jeff Crume from IBM

AFTERNOON

Cathy Olieslaeger, Journey Into Cybersecurity Podcast Host


SESSION TITLES


ADVANCING DATA LOSS PREVENTION WITH NEURAL NETWORKS: DETECTING & PRIORITIZING INCIDENTS
Samuel Cameron

In an era where data is the most valuable asset, safeguarding it from any potential leaks or breaches is crucial. Leveraging my experience in AI and Data Loss Prevention (DLP), I have developed a unique neural network model capable of detecting DLP incidents, and more importantly, classifying high-priority ones.

This talk will walk through the journey of extending the functionality of our existing, paid DLP tools to a more sophisticated, AI-driven approach. The primary focus is to demonstrate how this innovative model significantly cuts down the noise by efficiently prioritizing incidents.

I will provide an in-depth exploration of the general architecture used in designing this model, making it replicable for the audience. Attendees will gain actionable insights and practical knowledge on how to enhance their own DLP strategies using AI, ultimately contributing to more secure and reliable data protection.
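A hedged sketch of the core idea: reduce each incident to numeric features and score it with a learned function, keeping only what crosses a priority threshold. The feature names, weights, and threshold below are invented for illustration; the session presents the real architecture.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature weights; a trained network would learn these.
WEIGHTS = {"records_exposed": 0.002, "sensitivity": 1.5, "external_destination": 2.0}
BIAS = -4.0

def priority_score(incident):
    """Map a DLP incident's features to a 0..1 priority score."""
    z = BIAS + sum(w * incident.get(name, 0) for name, w in WEIGHTS.items())
    return sigmoid(z)

def triage(incidents, threshold=0.5):
    """Drop low-priority noise and rank what remains, highest priority first."""
    kept = [i for i in incidents if priority_score(i) >= threshold]
    return sorted(kept, key=priority_score, reverse=True)
```

A real model replaces the hand-set weights with layers trained on labeled incidents, but the noise-cutting step is the same: score, threshold, rank.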

Join me as I share my insights and experiences, aiming to drive the evolution of DLP strategies through the integration of AI and neural networks.


ADVERSARIAL AI: LYING CHATBOTS, DEEP FAKES & MORE

Jeff Crume

This presentation will explore the potential dangers of adversarial AI, lying chatbots, and deep fakes. We will discuss how these technologies are becoming more sophisticated and how they can be used to deceive people, spread disinformation, and even cause harm. Through real-world examples and demonstrations, we will explore the implications of these technologies for society in order to gain a better understanding of these emerging technologies and the risks they pose.

 [Note: This description was written by ChatGPT]


AUTOMATING IDENTITY GOVERNANCE – ACCENTURE & SAILPOINT
Chad Rychlewski
 

Identity governance is known to be a very manual and time-consuming "check the box" exercise for management. This can lead to security gaps, from unauthorized access to segregation of duties violations. With advancements in AI, we can now automate not only access reviews but every aspect of the identity lifecycle.


BECOMING A HACKER
Chris McCow   |   Omar Santos

Becoming a Hacker is an intensive boot camp providing insight and real-world examples of the techniques used to bypass, evade, and exploit vulnerabilities. The class covers a range of vulnerabilities and weaknesses, starting from basic network reconnaissance and moving through service and vulnerability enumeration, initial and secondary exploitation, evasion, and exploit writing.

CASE STUDY: LAUNCHING & RUNNING A SUCCESSFUL BUG BOUNTY PROGRAM
Chuck Kesler

This case study session will provide an overview of lessons learned from implementing a private bug bounty program to create an ongoing penetration testing program by engaging the worldwide community of independent security researchers. It will reference a CISO's experience with launching the program, the value that was seen from it, and some of the challenges encountered along the way. This session will be applicable to a wide range of organizations – SMBs and large enterprises can equally benefit from running bug bounty programs. The session will conclude with a summary of steps that attendees can take to launch their own program. It will also cover how students and other aspiring penetration testers can become bug bounty hunters.


CISOS SHARING CYBERSECURITY WISDOM
Aaron Lancaster   |   Chuck Kesler   |   Rob Main   |   Michael Garvin

Join this group of cybersecurity leaders as they share their experiences - good and bad. This is your chance to learn from the best!


CLOUD SECURITY GAME DAY
Maria Thompson   |   Ken Allen

The Cloud Security Game Day focuses on having users implement security solutions to prevent common hacking attacks. These include open access to databases, internal deletion of S3 objects, and SQL injection attacks. Users will also have to set up and manage logging and monitoring to discover how the attacks are happening and by whom. The GameDay is a collaborative learning exercise that tests skills in implementing AWS solutions to solve real-world problems in a gamified, risk-free environment. This is a completely hands-on opportunity to explore AWS services, architecture patterns, best practices, and group cooperation under minimal guidance. We step outside the boundaries of typical workshops through open-endedness and ambiguity. The lighthearted competition and entertainment, coupled with non-prescriptive tasks, are some of GameDay's unique attributes that make it a fun and memorable learning experience.
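One of the attacks named above, SQL injection, comes down to string concatenation versus parameter binding. A minimal sketch using Python's built-in sqlite3 (the table and column names are invented for illustration):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: user input is concatenated straight into the SQL text.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized: the driver binds `name` strictly as data, never as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

The classic payload `' OR '1'='1` returns every row through the unsafe path and nothing through the safe one.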


CYBER THREAT INTELLIGENCE
Brian Torres

This poster session highlights the efforts of cybersecurity capstone students at UNC Pembroke who are creating synthetic cyber knowledge graphs. These graphs aim to improve algorithms capable of extracting patterns and trends from datasets, ultimately enhancing anomaly detection predictions.




DEBUNKING ZERO TRUST & ITS RELEVANCE TO CYBERSECURITY
Srinivasan Vanamali

Zero Trust is being portrayed as the panacea for all ills related to cybersecurity and the new "tool" in the CISO's toolkit. Whilst at its core Zero Trust has its merits, it is often used by product vendors to promote their products and solutions. The recent NIST SP 800-207 publication approaches the principles of zero trust through a Zero Trust Architecture (ZTA) framework with an identity and access management construct and keeps the discussion at a technology level. In this session, we will debunk Zero Trust misconceptions and examine its limitations and value propositions.


DEVPHISH: EXPLORING SOCIAL ENGINEERING IN SOFTWARE SUPPLY CHAIN ATTACKS ON DEVELOPERS
Sima Jafarikhah

The Software Supply Chain (SSC) has captured considerable attention from attackers seeking to infiltrate systems and undermine organizations. There is evidence indicating that adversaries utilize Social Engineering (SocE) techniques specifically aimed at software developers. That is, they interact with developers at critical steps in the Software Development Life Cycle (SDLC), such as accessing GitHub repositories, incorporating code dependencies, and obtaining approval for Pull Requests (PR) to introduce malicious code. This talk explores the existing and emerging SocE tactics employed by adversaries to trick Software Engineers (SWEs) into delivering malicious software. By analyzing a diverse range of resources, encompassing established academic literature and real-world incidents, it systematically presents an overview of these manipulative strategies within the realm of the SSC. Such insights prove highly beneficial for threat modeling and security gap analysis.


EMPOWERING WOMEN IN CYBERSECURITY: EXPERIENCES & INSIGHTS
Lisa Bradley  |   Anthea Gonzales   |   Sphurthi Annamraju   |   Aliyana Isom 

This panel discussion gathers remarkable women from diverse backgrounds within the cybersecurity realm. These experts converge to share their personal journeys, insights, and invaluable advice with the goal of fostering a more inclusive and empowered cyber landscape.

The session unfolds with concise self-introductions, paving the way for an engaging panel. Through candid narratives, the panelists delve into their cybersecurity origins, sharing pivotal moments and shedding light on their individual paths within this dynamic field.

However, this panel isn't one-sided. Embracing a reverse panel format, the session transcends the traditional by inviting the audience to actively participate. In a spirited exchange, attendees share their experiences, adding depth and breadth to the conversation. Questions such as the challenges faced, sources of motivation, and personal insights are explored during this interactive segment, allotting time for a rich exchange of perspectives.

The session culminates with a succinct Q&A, allowing the audience to glean further wisdom from the panelists. A set of thoughtful questions guide this segment, exploring pivotal aspects such as initial forays into cybersecurity, overcoming challenges unique to women in the field, the impediments preventing more female participation, company support structures for women in cybersecurity, and crucial advice that could have steered careers differently if received earlier.

Ultimately, the panel aims to inspire, equip, and empower women contemplating or entrenched in the cybersecurity arena. While its primary focus rests on women, the collective wisdom and insights shared by these accomplished professionals transcend gender, providing a reservoir of knowledge for all aspiring cybersecurity enthusiasts. This initiative aspires not only to encourage more women to join but also to bolster the solidarity among existing female practitioners while advocating for a more inclusive cyber ecosystem.


ENABLING DEVELOPERS, PROTECTING USERS: INVESTIGATING HARASSMENT & SAFETY IN VR – VIRTUAL REALITY
Abhinaya SB

Virtual Reality (VR) is an emerging technology that enables users to partake in 360-degree virtual experiences using VR head-mounted displays. VR offers full-body tracking and synchronous voice chat and has controllers that provide haptic feedback, allowing people to interact in newer, more immersive ways compared to traditional social media. While VR presents these novel affordances, it also lowers the bar for unwanted behavior by malicious social actors. The anonymity it provides to users, as well as the lack of their physical presence, not only increases the likelihood of harassment but also makes identification of harassers challenging. While online harassment is not a new issue, the unique sense of embodiment and presence that VR enables, even without haptic technology, poses distinct challenges when it comes to addressing harassment in the VR environment. VR-based harassment may include virtual violence, virtual groping, and haptic sex crimes. To enable users to deal with harassment, VR applications have introduced safety controls such as the personal bubble, power gesture, safe zone, etc. However, the set of safety controls is not standardized across VR apps, with high variance in functionalities they provide. 

With VR touted to be the next big thing, with use cases beyond gaming such as education, training and socialization, we investigate harassment and safety in VR spaces. We conduct a multi-perspective study on VR safety by interviewing targets of VR-based harassment and VR developers. (i) We identify contexts where existing VR safety controls and moderation practices are non-usable and ineffective. For instance, VR users face usability challenges in finding users in crowded virtual spaces for the purpose of blocking them. Safety controls are also ineffective in providing feedback to the harassers. (ii) We highlight VR users’ expectations for making VR safer and contrast them with technical, legal, and financial challenges that VR developers perceive in implementing them. Users desire live moderation in social spaces and want users’ behavior to be tracked across VR apps; however, VR developers highlight difficulties in deploying live moderation at scale and the privacy risks in tracking users. (iii) We use our findings from this multi-perspective study to make recommendations to VR platform owners, app developers, and policy makers for improving safety in VR.


EXTRACTING INSIGHTS FROM MILLIONS OF ROBOCALLS
Sathvik Prasad

Automated bulk phone calls, or robocalls, have become a nightmare for both consumers and service providers. Endless waves of deceitful robocalls have frustrated phone users. Regulatory bodies are inundated with complaints about illegal robocalls. In an effort to combat unlawful robocalls, enforcement agencies and regulators are taking stringent actions against carriers, gateway providers, and call originators through "cease and desist" letters and hefty fines. Yet, stopping illegal robocalls is no easy feat. Carriers, regulators, and anti-robocall product vendors lack the tools to extract insights from robocall audio content. While call metadata (CDR and signaling) is readily available to these stakeholders, there are hardly any tools to investigate robocall audio content at the vast scale required.

In this talk, we will present SnorCall, a framework that provides a scalable and efficient solution for analyzing robocall audio content at scale. By analyzing millions of robocalls collected over two years, SnorCall has allowed us to uncover critical insights into the world of robocalling. Among its many findings, SnorCall has enabled us to quantify the prevalence of different scam and legitimate robocall topics, determine the organizations referenced in these calls (including brand impersonations), estimate the average amounts solicited in scam calls, identify shared infrastructure between campaigns, and monitor the rise and fall of election-related political calls. SnorCall uses novel audio fingerprinting techniques to identify robocalling campaigns that use identical or nearly-identical call audio in their schemes. The framework also employs a semi-supervised labeling system that enables domain experts to write simple labeling functions to classify robocalls accurately. 
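SnorCall's semi-supervised labeling layer is in the Snorkel family: domain experts write small rule functions that vote on a transcript or abstain, and votes are combined into a label. A stripped-down sketch (the keyword rules and labels below are invented; real labeling functions run over millions of transcribed calls):

```python
SCAM, LEGIT, ABSTAIN = "scam", "legit", None

def lf_gift_card(transcript):
    # Demands for gift-card payment are a strong scam signal.
    return SCAM if "gift card" in transcript else ABSTAIN

def lf_arrest_threat(transcript):
    return SCAM if "arrest warrant" in transcript else ABSTAIN

def lf_appointment(transcript):
    return LEGIT if "appointment reminder" in transcript else ABSTAIN

LFS = [lf_gift_card, lf_arrest_threat, lf_appointment]

def label(transcript):
    """Majority vote over the labeling functions that did not abstain."""
    votes = [v for v in (lf(transcript) for lf in LFS) if v is not ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN
```

The appeal of this design is that each rule is cheap to write and audit, and coverage grows by adding rules rather than relabeling data.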

Our work “Diving into Robocall Content with SnorCall” was recently published at USENIX Security 2023. Our previous work on robocall characterization was published at USENIX Security in 2020 and won the Internet Defense Prize.

FEED YOUR CYBERCURIOSITY
Adrianne George

Feed Your Cybercuriosity: Exploring Careers in Cybersecurity
In this workshop you will map your work style and motivators to potential cyber careers. After identifying careers that may be of interest and what long-term pathways you can pursue, we will explore how to position yourself to be successful in your cyber career search with interview best practices, resume keywords, and LinkedIn branding.


FREE CYBERSECURITY TOOLS SMACKDOWN
Amir Lawrence   |   Samuel Carter

Ready to brush up on new and exciting technology apps, tips, tricks, and tools? Come join us for a “Smackdown.” In this fast-paced session, you are invited to share your latest finds in productivity lifesavers, web and networking tools, collaborative tools…any tech tip, trick, hack, or tool. This is an interactive session where everyone is encouraged to share their newest find in an “open mic” atmosphere…so come prepared to share and smackdown with your Cybersecurity peers.


HONEYDB HONEYPOT WORKSHOP
Phillip Maddux

Explore the world of honeypots with the HoneyDB Honeypot Workshop. Honeypots, designed to unearth new threat insights and detect network intruders, can sometimes pose challenges with complex deployment processes. In response, the HoneyDB workshop offers an accessible and user-friendly solution for those intrigued by honeypots.

Whether you're a beginner or an enthusiast, this workshop provides a straightforward and uncomplicated approach to deploying your own honeypots. Join us to demystify the intricacies of honeypot implementation and gain hands-on experience in a hassle-free environment. Elevate your understanding of honeypots in cybersecurity with the simplicity and effectiveness of HoneyDB.

Workshop agenda:
- Intro to honeypots
- Discussion of open-source honeypots
- HoneyDB overview
- HoneyDB Agent overview
- Deploying the HoneyDB Agent in the cloud
- Testing the HoneyDB Agent
- Querying the Threat API
- HoneyDB CLI Python tool
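In the spirit of the workshop (though unrelated to HoneyDB's actual agent internals), a low-interaction honeypot can be sketched in a few lines of standard-library Python: listen on a port, present a fake service banner, and record whoever talks to you. The banner string is an arbitrary example.

```python
import socket
import threading

def run_honeypot(events, host="127.0.0.1", port=0,
                 banner=b"SSH-2.0-OpenSSH_8.9\r\n"):
    """Accept a single connection, send a fake service banner, and log the
    peer address plus whatever bytes it sends. Returns the bound port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 lets the OS pick a free port
    srv.listen(1)
    bound_port = srv.getsockname()[1]

    def serve():
        conn, peer = srv.accept()
        conn.sendall(banner)                    # bait the scanner
        data = conn.recv(1024)                  # capture its first probe
        events.append({"peer": peer[0], "data": data})
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port
```

A production agent would loop forever, timestamp events, and ship them to a collector such as HoneyDB's API rather than an in-memory list.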



INSIDER THREAT CASE ANALYSES
Ken Taitingfong

Five insider threat case studies: what went right, what went wrong, and lessons learned.



JOHN LEGEND POSSIBLY STOLE MY IDENTITY: THE RISE OF AI, DEEP FAKES, VOICE CLONING, & IDENTITY THEFT
Jon Sternstein

While I do not believe that I look like John Legend, enough people have mistaken me for him that I have come to the conclusion that he may have stolen my identity. With today's technology and connected world, it is easier than ever to take over a person's identity. This presentation will discuss the world of AI, deep fakes, voice cloning, and identity theft in a humorous but informative story. We will also discuss all of the positive outcomes of this incredibly powerful technology and how it will help not only the cybersecurity field, but also the world.


LONGSHOT: INDEXING GROWING DATABASES USING MPC & DIFFERENTIAL PRIVACY
Yanping Zhang


In this talk, I will introduce Longshot, a novel design for secure outsourced database systems that supports ad-hoc queries through the use of secure multi-party computation and differential privacy. By combining these two techniques, we build and maintain data structures (i.e., synopses, indexes, and stores) that improve query execution efficiency while maintaining strong privacy and security guarantees.

As new data records are uploaded by data owners, these data structures are continually updated by Longshot using novel algorithms that leverage bounded information leakage to minimize the use of expensive cryptographic protocols. Furthermore, Longshot organizes the data structures as a hierarchical tree based on when the update occurred, allowing for update strategies that provide logarithmic error over time. Through this approach, Longshot introduces a tunable three-way trade-off between privacy, accuracy, and efficiency.

Our experimental results confirm that our optimizations are not only asymptotic improvements but also observable in practice. In particular, we see a 5x efficiency improvement in updating our data structures even when the number of updates is less than 200. Moreover, the data structures significantly improve query runtimes over time, roughly 10^3x faster than the baseline after 20 updates.
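The differential-privacy half of that privacy/accuracy/efficiency trade-off rests on noise calibrated to query sensitivity. A minimal sketch of the Laplace mechanism (this is a standard textbook construction, not Longshot's actual protocol, which combines differential privacy with secure multi-party computation):

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Inverse-CDF sampling of a zero-mean Laplace distribution.
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Release a count with epsilon-differential privacy: one record changes
    the count by at most `sensitivity`, so Laplace noise with scale
    sensitivity/epsilon masks any individual's contribution."""
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

Smaller epsilon means stronger privacy and wider noise; the noise is zero-mean, so repeated releases average toward the true value, which is exactly why composition must be accounted for.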

MICRO TRENDS IN CYBERSECURITY YOU MAY NOT HEAR ABOUT IN OTHER SESSIONS TODAY
Cathy Olieslaeger

The topics discussed at cybersecurity conferences usually center around the latest and greatest mousetraps, the newest shiny toy, AI, and all that jazz. While cybersecurity and the underlying functionality keep evolving and improving, the outcomes haven't changed. People and organizations get hacked. The bad guys win.

Ever wonder why? Why do we keep trying the same thing and expecting different results? This session offers insights to challenge your perspective and help you lead your own cybersecurity journey.


MAKE YOUR NETWORKS DANGEROUS!
Phillip Maddux

As a dedicated security practitioner, you've diligently implemented the fundamental controls, fortified your defenses, and matured your security program. Yet, in the ever-evolving landscape of cyber threats, a lingering question persists – is it enough? What if there's a pivotal layer in your defense-in-depth strategy that could elevate your security posture to unprecedented levels?

Imagine a scenario where your network becomes a perilous minefield for malicious actors, a place where their every move is fraught with risk. In this session, we unravel the enigma of the next layer in a comprehensive defense strategy – a layer that transforms your security approach from reactive to proactive.

This presentation is for security practitioners of all experience levels. It will cover a brief introduction to honeypots and deception, how to determine the right time to implement deception, considerations and what to expect from deception, and a walkthrough of several practical use cases for implementing deception.

The overall goal of this presentation is to inform the audience about the basics of deception and inspire them to raise the bar by including deception as part of their defense-in-depth strategy. It's time to redefine the rules of engagement with malicious actors and make your networks dangerous!

 

SOLVING THE TALENT SHORTFALL – BLENDING IT MANAGEMENT, CYBERSECURITY, & DATA SCIENCE

Neil Khatod

With the affordability of large data storage combined with advanced data analytics, we have seen a fundamental shift in the cyber fight as attackers have begun to use machine learning and analytics to attack networks. So how do we change to keep parity in the competition? To answer this, I discuss applying data science, data analytics, and AI to maintaining networks as well as cyber defense.

 

BUILDING A CYBERSECURITY GOVERNANCE, RISK, AND COMPLIANCE PROGRAM FROM THE GROUND UP
Laura Rodgers   |   Myriam Batista  |  Steve Cobb  |  Daniel Barber 

GRC programs help organizations take a comprehensive and integrated approach to managing risks. In cybersecurity, this means addressing not only technical vulnerabilities but also considering the broader organizational and regulatory landscape.

 

A GRC program for cybersecurity is essential for organizations to proactively manage risks, comply with regulations, and maintain a resilient and effective cybersecurity posture in an ever-evolving threat landscape.


PREPARING YOURSELF FOR A LEADERSHIP ROLE
Mardecia Bell   |   Donna Petherbridge   |   Colleen Brown

Have you been giving some thought to how to step into leadership roles as you move along in your career, whether at your own institution or elsewhere? In this session, experienced leaders will give you an overview of specific steps that you can take to prepare for a leadership role and improve your leadership toolkit, including knowing your why; seeking mentors, sponsors, and allies; and developing the specific leadership skills and mindset you'll need to be successful.

This presentation will begin with a discussion of the difference between leadership and management. Then, experienced leaders will talk about specific actions that individuals can take to prepare for leadership roles, including:
1) Understanding your own values so you are able to draw on a values framework for decision making. Participants will do a values-identification activity.
2) Cultivating relationships with mentors, sponsors and allies. Presenters will describe the value of each of these types of relationships as well as how to identify those individuals. 
3) Taking advantage of, and then applying, leadership training and stretch assignment opportunities. Participants will receive suggestions for training to take and how to ask for stretch assignments. 
4) Getting to know the broader organization; e.g. your team is embedded in a context; leaders have to understand the context. Participants will gain strategies for stepping outside of their own organizations, including joining university wide committees and task forces. 
5) Improving your communication skills, including getting comfortable with uncomfortable conversations.
6) Letting go of operational tasks and spending time with others doing those tasks
7) Embracing the leadership role - some tips on what to do and focus on when you are in a leadership role.

We plan to engage participants with opportunities for sharing experiences through interactive activities throughout the presentation.



SECURE CODE REVIEW – JUICE SHOP
Joshua Beck

OWASP’s Juice Shop is one of the most famous insecure web applications around. You may have heard of it; you may have even spent significant time hacking it. But have you ever dug deeper? Have you ever looked under the hood at what makes it so insecure? 
Join Joshua Beck, a Staff Application Security Engineer with John Deere, as he dives headfirst into the insecure and fruit-scented waters of the Juice Shop: walking through the code and comparing it to what the user sees on the front end, providing the audience a complete picture of the life cycle of a vulnerability through a target system.




THE IMPACT OF DEVSECOPS QUANTIFIED
Larry Maccherone

What if I could tell you the three application security practices whose adoption would most lower risk? What if I could also quantify the impact that each practice would have on your outcomes? Imagine being able to focus your entire organization (and your limited budget) on these three things rather than have your efforts spread across dozens of practices. Imagine how different the conversation with engineering teams and budget approvers will be if you can present research that shows just how important these three things are compared to other things you could invest in.

This talk is a presentation of research that quantifies the impact that various DevSecOps software security practices have on security risk outcomes. We have data from 200 different teams in the technologically and process diverse environment inside Comcast. We've tracked this data over time as teams have adopted practices like secure coding training, threat modeling, pen testing, SAST/IAST/SCA tool usage, security code review, etc. We have then correlated outcomes like network vulnerability to not only determine which practices have the most impact but to quantify how much of an impact each has.
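The statistical core of that analysis is simple enough to sketch: correlate each practice's adoption level with an outcome measure across teams. The per-team numbers below are invented for illustration; the research draws on real data from 200 Comcast teams.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-team data: threat-modeling adoption level (0-3)
# versus count of vulnerabilities found in that team's services.
threat_modeling = [0, 1, 2, 3, 3, 0, 2, 1]
vulns           = [9, 7, 5, 1, 2, 8, 4, 6]
```

Running this per practice and ranking by correlation strength is what lets the research say which three practices matter most; correlation alone does not prove causation, which is why tracking adoption over time matters.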


TRANSFORMATION BLUEPRINT FOR DEVELOPER-CENTRIC APPLICATION SECURITY
Larry Maccherone


The traditional approach to quality assurance (QA) was disrupted when the Agile movement caused most development teams to start taking at least partial ownership of the quality of their products and involved fundamental changes to mindset, terminology, tools, metrics, roles, and practices. The cloud-native and DevOps movements similarly disrupted traditional IT Ops.

Now it's security's turn, but here's the rub.

NIST, SANS, OWASP, PCI, etc. provide lists of candidate application security practices, but the items in the list are unprioritized, target security specialists, and fail to specify adaptations needed for a developer-first approach. Attempting to shift these practices left without proper consideration of modern development practices and priorities is a recipe for frustration, resistance, and false starts.

You will come out of this workshop with a Transformation Blueprint for accomplishing the cultural shift to developer-centric application security at your organization. The approach is derived from the program that Larry has used to accomplish this shift for over 600 development teams. Since Larry is a developer, writing code every day, his program is perfectly suited to the way development teams really want to work, rather than how security folks assume they work.

UNLOCKING THE NEXUS: SBOM, DEPENDENCY MANAGEMENT, AND THE POWER OF VEX
Lisa Bradley

This presentation intricately explores the interconnected facets of Software Bill of Materials (SBOM), dependency management, and the transformative influence of Vulnerability Exploitability eXchange (VEX). Its focus is to unveil their inherent connections and significance within the broader landscape of software security and vulnerability management. This exploration dives deep into pivotal realms, shedding light on:

• SBOM Essentials: Grasping the foundational role of SBOM, its contribution to software transparency, and its potential to fortify cyber ecosystems through meticulous inventory management.

• Dependency Management Dynamics: Untangling the intricate web of software dependencies, and unveiling strategies to navigate vulnerabilities, ensuring robust and secure software infrastructures.

• The VEX Impact: Immersing into the transformative prowess of VEX, scrutinizing its capacity to identify, understand, and mitigate vulnerabilities. 

This comprehensive presentation illuminates the individual strengths and collective potential of these critical elements, contributing to the creation of resilient and fortified software landscapes.
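The interplay described above can be made concrete with a toy filter: an SBOM says what ships, vulnerability data says what is affected, and VEX statements say what is actually exploitable. The dictionaries below are simplified stand-ins for real CycloneDX/VEX documents, and all identifiers are invented.

```python
def actionable_vulns(sbom_components, vuln_reports, vex_statements):
    """Return vulnerabilities that (a) hit a component we actually ship and
    (b) are not ruled out by a 'not_affected' VEX statement."""
    shipped = {c["purl"] for c in sbom_components}
    ruled_out = {(s["vuln_id"], s["purl"])
                 for s in vex_statements if s["status"] == "not_affected"}
    return [v for v in vuln_reports
            if v["purl"] in shipped
            and (v["vuln_id"], v["purl"]) not in ruled_out]
```

This is the practical payoff of pairing SBOM with VEX: instead of triaging every CVE in every dependency, teams work only the intersection that is both shipped and exploitable.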



UNTRUSTIDE: EXPLOITING WEAKNESSES IN VS CODE EXTENSIONS
Elizabeth Lin

With the rise in threats against the software supply chain, developer integrated development environments (IDEs) present an attractive target for attackers. For example, researchers have found extensions for Visual Studio Code (VS Code) that start web servers and can be exploited via JavaScript executing in a web browser on the developer's host. This paper seeks to systematically understand the landscape of vulnerabilities in VS Code's extension marketplace. We identify a set of four sources of untrusted input and three code targets that can be used for code injection and file integrity attacks and use them to design taint analysis rules in CodeQL. We then perform an ecosystem-level analysis of the VS Code extension marketplace, studying 25,402 extensions that contain code. Our results show that while vulnerabilities are not pervasive, they exist and impact millions of users. Specifically, we find 21 extensions with verified proof-of-concept exploits of code injection attacks impacting a total of over 6 million installations. Through this study, we demonstrate the need for greater attention to the security of IDE extensions.
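The source-to-sink reasoning behind taint analysis rules like those written in CodeQL can be illustrated with a toy tracker over a made-up three-address statement form. The source and sink names below are loosely inspired by the categories in the work, not its actual rule definitions:

```python
SOURCES = {"read_workspace_setting", "read_file", "http_response"}
SINKS = {"eval_js", "run_shell"}

def find_flows(statements):
    """Each statement is (result_var, callee, arg_vars). Taint starts at
    source calls, propagates through any call that consumes a tainted
    argument, and is reported when it reaches a sink."""
    tainted = set()
    flows = []
    for result, callee, args in statements:
        args_tainted = any(a in tainted for a in args)
        if callee in SINKS and args_tainted:
            flows.append((callee, tuple(args)))
        if callee in SOURCES or args_tainted:
            tainted.add(result)
    return flows

program = [
    ("cfg", "read_workspace_setting", ()),   # untrusted: workspace config
    ("cmd", "concat", ("prefix", "cfg")),    # taint propagates through concat
    ("_", "run_shell", ("cmd",)),            # tainted data reaches a sink
    ("x", "constant", ()),
    ("_", "eval_js", ("x",)),                # clean argument: no finding
]
```

A real analysis adds path sensitivity, sanitizer modeling, and interprocedural flow, but the finding it emits has the same shape: a source value reaching a dangerous sink.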


YOU'LL NEVER BE BORED IN THREAT RESEARCH: BATTLING PIKABOT

Kelsey Merriman

The proliferation of sophisticated malware has posed exceptional challenges to the cybersecurity landscape, with Pikabot emerging as a notable and evasive malware family. I endeavor to provide a comprehensive and consumable analysis of the Pikabot malware utilizing a combination of threat intelligence, malware analysis, reverse engineering, and bot emulation. This research aims to allow cybersecurity and computer science students to glimpse the ecosystem of a well-known threat actor, the capabilities of a sophisticated malware family, the value of reverse engineering, the possibilities of devising detection using Python, and how anyone at any level can contribute to the battle.



ZERO TRUST THREAT MODELING
Chris Romeo

Zero trust is all the rage, and it has vast implications for application security and threat modeling. Zero trust threat modeling means the death of the trust boundary. Zero trust security models assume attackers are already in the environment, and data sources and flows can no longer be hidden. This uncovers threats never dreamed of in classic threat modeling.

Begin by laying a foundation of zero trust against the lens of application security. What does Zero Trust architecture mean as it reaches the top of the technology stack? Zero-trust architecture brings us back to when it was all about objects and subjects. The essence of zero trust is only allowing certain subjects to access particular objects.
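That subject/object framing reduces to a deny-by-default check evaluated on every request, with context signals (device health, MFA) as first-class inputs. A toy sketch; the rule shape, context keys, and names are invented for illustration:

```python
def decide(policy, subject, obj, context):
    """Zero-trust flavored check: nothing is trusted by network position;
    every request must match an explicit rule AND a healthy context."""
    context_ok = context.get("mfa_passed") and context.get("device_compliant")
    if not context_ok:
        return "deny"
    for rule in policy:
        if rule["subject"] == subject and rule["object"] == obj:
            return "allow"
    return "deny"   # default deny: no matching rule, no access

POLICY = [{"subject": "payments-svc", "object": "ledger-db"}]
```

Notice there is no trust boundary in this model: the caller's location never appears, only the subject's identity, the object requested, and the request's context.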

Apply the concept of zero trust to threat modeling by understanding what changes with threat modeling in a zero-trust world and by considering a threat model of the zero-trust architecture. We'll explore using new design principles in a zero-trust threat model and introduce a mnemonic to help apply the major threats impacting zero trust to threat modeling and expose a new taxonomy of threats specific to the zero-trust application.

So long live the threat model, but say goodbye to the trust boundary.



STUDENT RESEARCH SESSIONS


Full-Privacy for Account-Based Cryptocurrencies
Varun Madathil

Account-based cryptocurrencies such as Ethereum do not provide any privacy to the participants. In this talk, I will present existing techniques to achieve weaker notions of privacy, and also present our new protocol PriFHEte that achieves full privacy in this setting.


Identifying Root Causes of Security Advice Crises
Lorenzo Neil

The goal of online security advice is to inform everyday users of technology how to manage the security of their computers and devices. Users are exposed to a sea of security advice and are unsure which advice is best suited to them. Security experts also struggle to prioritize and practice advised behaviors, undermining both the advice's purpose and potentially their security. While the problem is clear, no rigorous studies have established the root causes of overproduction, lack of prioritization, or other problems with security advice. Without understanding the causes, we cannot hope to remedy their effects.

In this talk, we address these challenges by presenting our research into the processes that authors follow to develop published security advice. In a semi-structured interview study with 21 advice writers, we asked about the authors' backgrounds, advice creation processes in their organizations, the parties involved, and how they decide to review, update, or publish new content. Among the 17 themes we identified from our interviews, we learned that authors seek to cover as much content as possible, draw on multiple diverse external sources, typically review or update content only after major security events, and make few if any conscious attempts to deprioritize or curate less essential content. We recommend that researchers develop methods for curating security advice and guidance on messaging for technically diverse user bases, and that authors judiciously identify key messaging ideas and schedule periodic proactive content reviews. If implemented, these actionable recommendations would help authors and users alike reduce the burden of advice overproduction while improving compliance with secure computing practices.


NOPE: A Game-Based Platform for Spear Phishing Awareness
Abdulrahman Aldkheel

Spear phishing, a prevalent cybersecurity threat, illicitly acquires sensitive information by exploiting human vulnerabilities through deceptive tactics. Considering the increasingly sophisticated nature of spear phishing attacks, awareness of them is crucial to empowering individuals with the skills and knowledge required to identify and defend against them. This research presents a novel game training system to address this challenge. Our proposed system consists of dual-gamified modules tailored for both attackers and recipients to provide a comprehensive understanding of phishing strategies and defensive measures. The game has several unique features, including a leaderboard scoring system to encourage competition and motivation, comprehensive game history and data tracking for individualized feedback, penalties for wrong actions, cue-level classification to improve detection skills, and personalized reinforcement that enables users to revisit and improve their weaknesses. The implementation of such game-based training systems has the potential to enhance cybersecurity education in the future and offer a scalable and effective countermeasure to phishing attacks. For future work, we will conduct an empirical study to evaluate the effectiveness of our proposed system to refine its components in light of user feedback and performance metrics to improve its usability and effectiveness.


Quantum Machine Learning
Chibuike Okekeogbu

Quantum machine learning is an emerging interdisciplinary field that combines machine learning and quantum computing to enhance data processing and problem-solving capabilities. This field explores the use of quantum phenomena for learning systems, the use of quantum computers for learning on quantum data, and the implementation of machine learning algorithms on quantum computers. Quantum machine learning has the potential to revolutionize computer science by speeding up information processing beyond existing classical speeds.


Safeguarding Democratic Processes: Understanding and Enhancing Election Security
Jason Green

This presentation critically examines the vulnerabilities in election systems, particularly focusing on cyber threats within intricately networked structures. The presenter underscores the challenges in implementing common-sense approaches to improve election security and emphasizes the ongoing risks associated with both connected and indirectly connected systems. The presentation will cover current threats and future research directions, including the exploration of ballot marking devices, voter registration system security, and more.


Secure Data Forwarding In Cloud Storage

Bedeabasi John

My session will cover secure data forwarding in cloud storage. Data forwarding is analogous to network routing and possesses known vulnerabilities. Although there is no known record of these vulnerabilities being exploited, cloud systems are prime attack targets and, as such, should be closely monitored for other hidden vulnerabilities.


Smart Farm Cybersecurity Framework: Artificial Intelligence, Internet of Things, Digital Twins
Mahsa Tavasoli

I will be presenting my research in poster format with the title: "Integrating Artificial Intelligence, Internet of Things, Digital Twins, and Enhanced Threat Modeling in the Smart Farm Cybersecurity Framework."


Stress Detection: Detecting, Monitoring, and Reducing Stress in Cyber-Security Operation Centers
Tiffany Davis-Stewart



ENABLING DEVELOPERS, PROTECTING USERS:  INVESTIGATING HARASSMENT & SAFETY IN VIRTUAL REALITY

Abhinaya S B

Virtual Reality (VR) is an emerging technology that enables users to partake in 360-degree virtual experiences using VR head-mounted displays. VR offers full-body tracking and synchronous voice chat and has controllers that provide haptic feedback, allowing people to interact in newer, more immersive ways compared to traditional social media. While VR presents these novel affordances, it also lowers the bar for unwanted behavior by malicious social actors. The anonymity it provides to users, as well as the lack of their physical presence, not only increases the likelihood of harassment but also makes identification of harassers challenging. While online harassment is not a new issue, the unique sense of embodiment and presence that VR enables, even without haptic technology, poses distinct challenges when it comes to addressing harassment in the VR environment. VR-based harassment may include virtual violence, virtual groping, and haptic sex crimes. To enable users to deal with harassment, VR applications have introduced safety controls such as the personal bubble, power gesture, safe zone, etc. However, the set of safety controls is not standardized across VR apps, with high variance in functionalities they provide.

With VR touted to be the next big thing, with use cases beyond gaming such as education, training and socialization, we investigate harassment and safety in VR spaces. We conduct a multi-perspective study on VR safety by interviewing targets of VR-based harassment and VR developers. (i) We identify contexts where existing VR safety controls and moderation practices are non-usable and ineffective. For instance, VR users face usability challenges in finding users in crowded virtual spaces for the purpose of blocking them. Safety controls are also ineffective in providing feedback to the harassers. (ii) We highlight VR users’ expectations for making VR safer and contrast them with technical, legal, and financial challenges that VR developers perceive in implementing them. Users desire live moderation in social spaces and want users’ behavior to be tracked across VR apps; however, VR developers highlight difficulties in deploying live moderation at scale and the privacy risks in tracking users. (iii) We use our findings from this multi-perspective study to make recommendations to VR platform owners, app developers, and policy makers for improving safety in VR.


UNTRUSTIDE: EXPLOITING WEAKNESSES IN VS CODE EXTENSIONS
Elizabeth Lin

With the rise in threats against the software supply chain, developer integrated development environments (IDEs) present an attractive target for attackers. For example, researchers have found extensions for Visual Studio Code (VS Code) that start web servers and can be exploited via JavaScript executing in a web browser on the developer's host. This paper seeks to systematically understand the landscape of vulnerabilities in VS Code's extension marketplace. We identify a set of four sources of untrusted input and three code targets that can be used for code injection and file integrity attacks and use them to design taint analysis rules in CodeQL. We then perform an ecosystem-level analysis of the VS Code extension marketplace, studying 25,402 extensions that contain code. Our results show that while vulnerabilities are not pervasive, they exist and impact millions of users. Specifically, we find 21 extensions with verified proof-of-concept exploits of code injection attacks, impacting a total of over 6 million installations. Through this study, we demonstrate the need for greater attention to the security of IDE extensions.


LONGSHOT: INDEXING GROWING DATABASES USING MPC & DIFFERENTIAL PRIVACY
Yanping Zhang

In this talk, I will introduce Longshot, a novel design for secure outsourced database systems that supports ad-hoc queries through the use of secure multi-party computation and differential privacy. By combining these two techniques, we build and maintain data structures (i.e., synopses, indexes, and stores) that improve query execution efficiency while maintaining strong privacy and security guarantees.

As new data records are uploaded by data owners, these data structures are continually updated by Longshot using novel algorithms that leverage bounded information leakage to minimize the use of expensive cryptographic protocols. Furthermore, Longshot organizes the data structures as a hierarchical tree based on when the update occurred, allowing for update strategies that provide logarithmic error over time. Through this approach, Longshot introduces a tunable three-way trade-off between privacy, accuracy, and efficiency.

Our experimental results confirm that our optimizations are not only asymptotic improvements but also observable in practice. In particular, we see a 5x efficiency improvement in updating our data structures even when the number of updates is under 200. Moreover, the data structures significantly improve query runtimes over time, roughly 10^3x faster than the baseline after 20 updates.


Examining Cryptography and Randomness Failures in Open-Source Cellular Cores

K. Virgil English

Industry is increasingly adopting private 5G networks to securely manage their wireless devices in retail, manufacturing, natural resources, and healthcare. As with most technology sectors, open-source software is well poised to form the foundation of deployments, whether it is deployed directly or as part of well-maintained proprietary offerings. This paper seeks to examine the use of cryptography and secure randomness in open-source cellular cores. We design a set of 13 CodeQL static program analysis rules for cores written in both C/C++ and Go and apply them to 7 open-source cellular cores implementing 4G and 5G functionality. We identify two significant security vulnerabilities, including predictable generation of TMSIs and improper verification of TLS certificates, with each vulnerability affecting multiple cores. In identifying these flaws, we hope to correct implementations to fix downstream deployments and derivative proprietary projects.


EXTRACTING INSIGHTS FROM MILLIONS OF ROBOCALLS
Sathvik Prasad

Automated bulk phone calls, or robocalls, have become a nightmare for both consumers and service providers. Endless waves of deceitful robocalls have frustrated phone users. Regulatory bodies are inundated with complaints about illegal robocalls. In an effort to combat unlawful robocalls, enforcement agencies and regulators are taking stringent actions against carriers, gateway providers, and call originators through "cease and desist" letters and hefty fines. Yet, stopping illegal robocalls is no easy feat. Carriers, regulators, and anti-robocall product vendors lack the tools to extract insights from robocall audio content. While call metadata (CDR and signaling) is readily available to these stakeholders, there are hardly any tools to investigate robocall audio content at the vast scale required.

In this talk, we will present SnorCall, a framework that provides a scalable and efficient solution for analyzing robocall audio content at scale. By analyzing millions of robocalls collected over two years, SnorCall has allowed us to uncover critical insights into the world of robocalling. Among its many findings, SnorCall has enabled us to quantify the prevalence of different scam and legitimate robocall topics, determine the organizations referenced in these calls (including brand impersonations), estimate the average amounts solicited in scam calls, identify shared infrastructure between campaigns, and monitor the rise and fall of election-related political calls. SnorCall uses novel audio fingerprinting techniques to identify robocalling campaigns that use identical or nearly-identical call audio in their schemes. The framework also employs a semi-supervised labeling system that enables domain experts to write simple labeling functions to classify robocalls accurately.
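As a rough illustration of the semi-supervised labeling idea described above (not SnorCall's actual code; the keywords, labels, and resolution strategy below are invented for this sketch), a labeling function is just a small heuristic that votes on a transcript's topic or abstains:

```python
# Illustrative Snorkel-style labeling functions for robocall transcripts.
# All keywords and label values here are invented for this sketch.

ABSTAIN, SCAM, POLITICAL = -1, 0, 1

def lf_gift_card(transcript: str) -> int:
    """Vote SCAM when the call solicits gift-card payment, a common scam cue."""
    return SCAM if "gift card" in transcript.lower() else ABSTAIN

def lf_warranty(transcript: str) -> int:
    """Vote SCAM on the classic 'extended warranty' phrasing."""
    return SCAM if "extended warranty" in transcript.lower() else ABSTAIN

def lf_election(transcript: str) -> int:
    """Vote POLITICAL on election-related calls."""
    t = transcript.lower()
    return POLITICAL if "vote" in t or "election" in t else ABSTAIN

def label(transcript: str) -> int:
    """Naive resolution: the first non-abstaining vote wins. A real system
    combines many noisy votes with a learned label model instead."""
    for lf in (lf_gift_card, lf_warranty, lf_election):
        vote = lf(transcript)
        if vote != ABSTAIN:
            return vote
    return ABSTAIN

print(label("We've been trying to reach you about your car's extended warranty"))
```

The appeal of this pattern is that domain experts write only these small, imprecise heuristics, and the framework aggregates them into accurate labels at scale.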

Our work “Diving into Robocall Content with SnorCall” was recently published at USENIX Security 2023. Our previous work on robocall characterization was published at USENIX Security in 2020 and won the Internet Defense Prize.



CYBER THREAT INTELLIGENCE 
Brian Torres

This poster session highlights the efforts of cybersecurity capstone students at UNC Pembroke who are creating synthetic cyber knowledge graphs. These graphs aim to improve algorithms capable of extracting patterns and trends from datasets, ultimately enhancing anomaly detection predictions.


Biometric Artificial Intelligence

Brianna Phifer

A discussion of the intersection of biometric security and AI.


Future of AI: Artificial Intelligence in the Workforce

Feyf Osman

The rapid advancement of artificial intelligence sparks both optimism and concern regarding its influence on the future workforce. Our research delves into the impact of AI on employment, examining its positive and negative implications for the workforce. In our session, we will focus on the ethical considerations of AI algorithms, data security and vulnerabilities in AI systems, and the requirements for a comprehensive ethical framework to regulate responsible AI use as it grows rapidly in the workforce. We will also explore how job displacement stands as a pivotal concern, as AI's automation capabilities threaten certain roles, particularly in manufacturing, customer service, transportation, and repetitive skill sets, and how the integration of AI also fosters new job opportunities by demanding expertise in data science, machine learning, and AI engineering.


ARGUS: A FRAMEWORK FOR STAGED STATIC TAINT ANALYSIS OF GITHUB WORKFLOWS AND ACTIONS
Greg Tystahl

Millions of software projects leverage automated workflows, like GitHub Actions, for performing common build and deploy tasks. While GitHub Actions have greatly improved the software build process for developers, they pose significant risks to the software supply chain by adding more dependencies and code complexity that may introduce security bugs. This talk presents ARGUS, the first static taint analysis system for identifying code injection vulnerabilities in GitHub Actions. We used ARGUS to perform a large-scale evaluation on 2,778,483 Workflows referencing 31,725 Actions and discovered critical code injection vulnerabilities in 4,307 Workflows and 80 Actions. We also directly compared ARGUS to two existing pattern-based GitHub Actions vulnerability scanners, demonstrating that our system exhibits a marked improvement in terms of vulnerability detection, with a discovery rate more than seven times (7x) higher than the state-of-the-art approaches. These results demonstrate that command injection vulnerabilities in the GitHub Actions ecosystem are not only pervasive but also require taint analysis to be detected.
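To make the vulnerability class concrete, the sketch below shows the kind of naive pattern matching that taint analysis improves upon (this is not ARGUS itself, and the specific event fields flagged are an illustrative subset): a workflow that interpolates attacker-controlled event data, such as an issue title, directly into a `run:` shell step allows command injection.

```python
# A minimal, pattern-based sketch of the GitHub Actions injection class:
# untrusted event data interpolated directly into a run: shell step.
# ARGUS replaces this kind of regex scan with staged taint analysis.
import re

# Event fields an issue author or PR submitter controls (illustrative subset).
UNTRUSTED = re.compile(
    r"\$\{\{\s*github\.event\.(issue\.title|issue\.body|"
    r"pull_request\.title|pull_request\.body|comment\.body)\s*\}\}"
)

def find_injections(workflow_yaml: str) -> list[str]:
    """Return untrusted expressions appearing on run: lines of a workflow."""
    hits = []
    for line in workflow_yaml.splitlines():
        if "run:" in line:
            hits.extend(m.group(0) for m in UNTRUSTED.finditer(line))
    return hits

vulnerable = 'run: echo "New issue: ${{ github.event.issue.title }}"'
print(find_injections(vulnerable))  # the interpolated title can inject shell commands
```

A pattern scan like this misses flows where the untrusted value passes through intermediate variables or composite Actions before reaching a shell, which is exactly why taint tracking from sources to sinks finds far more vulnerabilities.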


BIOMETRIC ARTIFICIAL INTELLIGENCE

Mykah Stone

A discussion of the intersection of biometric security and AI.


FUTURE OF AI: ARTIFICIAL INTELLIGENCE IN THE WORKFORCE
Matthew Sepson

The rapid advancement of artificial intelligence sparks both optimism and concern regarding its influence on the future workforce. Our research delves into the impact of AI on employment, examining its positive and negative implications for the workforce. In our session, we will focus on the ethical considerations of AI algorithms, data security and vulnerabilities in AI systems, and the requirements for a comprehensive ethical framework to regulate responsible AI use as it grows rapidly in the workforce. We will also explore how job displacement stands as a pivotal concern, as AI's automation capabilities threaten certain roles, particularly in manufacturing, customer service, transportation, and repetitive skill sets, and how the integration of AI also fosters new job opportunities by demanding expertise in data science, machine learning, and AI engineering.


DATA INTELLIGENCE AS A SERVICE: A CLOUD PLATFORM FOR AI-POWERED CYBERSECURITY RESEARCH AND PRACTICE
Cole Hearld

This research project is a collaboration among researchers and students at East Carolina University and McMaster University in Canada to establish a Data Intelligence as a Service (DIaaS) cloud platform for AI-driven cybersecurity research, development, and hands-on practice to train security analysts from different industries. DIaaS uses Google Cloud Platform (GCP) to provide security analysts with customized virtual workstations, tailored to their needs, for practicing and advancing their knowledge of cloud- and AI-based cybersecurity solutions on synthetic or proprietary datasets. The platform will generate realistic, behavior-pattern-driven datasets representing normal and red-team activities, augmented significantly via our proprietary generative adversarial network (GAN) solution. The locally developed AI solutions will be packaged into Docker images offered through the virtual workstations. The vulnerability analysis tools will leverage explainable artificial intelligence (XAI), inverse reinforcement learning, Transformers, knowledge-graph visualization techniques, and cloud-based process automation using CI/CD and Kubernetes container orchestration. Such solutions will allow security analysts to develop their own ML models, migrate to the cloud, and easily interact with the ICS using dialogue-based chatbots.


A LARGE-SCALE STUDY OF UPDATE METRICS OF OSS PACKAGES AND THEIR SECURITY IMPLICATIONS

Imranur Rahman

In this session, we will go through different update metrics of OSS packages across ecosystems and explore their security implications. By "update metrics" we mean metrics that capture the "outdatedness" of a package's dependencies. In this study, we propose two update metrics and explore how packages fare in different ecosystems, e.g., npm, PyPI, and Cargo. One of our findings is that ecosystem size matters when analyzing these update metrics.
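As a toy illustration of what an outdatedness-style update metric can look like (not necessarily the metrics proposed in this work; the package names and versions below are invented), one simple choice is to count how many releases each pinned dependency lags behind the latest, then average across dependencies:

```python
# Toy dependency "outdatedness" metric: average number of releases each
# pinned dependency lags behind the latest release in its registry.
# Package names, versions, and release lists are invented for illustration.

def version_lag(pinned: str, releases: list[str]) -> int:
    """Number of releases published after the pinned version
    (releases is ordered oldest to newest)."""
    return len(releases) - 1 - releases.index(pinned)

def mean_outdatedness(deps: dict[str, tuple[str, list[str]]]) -> float:
    """Average version lag across a package's dependencies."""
    lags = [version_lag(pinned, rels) for pinned, rels in deps.values()]
    return sum(lags) / len(lags)

deps = {
    "left-pad": ("1.0.0", ["1.0.0", "1.1.0", "1.2.0"]),  # two releases behind
    "requests": ("2.31.0", ["2.30.0", "2.31.0"]),        # up to date
}
print(mean_outdatedness(deps))  # → 1.0
```

Release-count lag is only one axis; a time-based variant (days since a newer version existed) weights long-neglected dependencies more heavily, and ecosystem release cadence changes how either number should be interpreted.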


EXPLOITING DEVOPS FLOW GAPS: UNLEASHING SOFTWARE SUPPLY CHAIN CHAOS

Elif Sahin

Insufficient flow control mechanisms describe a situation where a person who gains unauthorized access to a CI/CD system (such as the SCM, CI, or artifact repository) can easily insert malicious code or artifacts into the development pipeline.

This occurs because there aren't enough security measures in place to require extra checks before changes are applied. In this challenge, Checkov is used to ensure that an S3 bucket created by Terraform code remains private, preventing accidental exposure to the public.

However, this protective control can be bypassed by manipulating the Checkov configuration using a technique known as the Malicious Code Analysis (MCA) vector.


EXPLOITING DEVOPS FLOW GAPS: GIT YOUR INJECTIONS
Elihah Tripp

RezLab at UNCW has launched an initiative to raise awareness of and preparedness for software supply chain (SSC) attacks. With such attacks increasing in frequency by 742% between 2019 and 2022, this is a vital area of study for computing professionals. To prepare for these attacks, students at RezLab have put together posters detailing examples of SSC attacks and their real-life counterparts, which have had extensive impacts.
