The rapid advancement of technology brings with it a host of ethical challenges. We asked industry experts to share their perspectives on the ethical considerations that come with tech development. Here are examples of ethical dilemmas they’ve encountered or thought about. Learn how tech leaders are addressing these challenges and transforming ethical concerns into opportunities for responsible innovation.
- Prioritize Human Values in Tech Design
- Balance AI Efficiency with Fairness
- Turn Ethical Concerns into Business Opportunities
- Empower Users with AI Assistive Tools
- Inject Diversity to Combat Echo Chambers
- Slow Down for Responsible Innovation
- Respect Privacy in GPS Tracking Solutions
- Navigate Data Collection and Customer Privacy
- Address Bias in AI Decision-Making Systems
- Prioritize User Fairness Over Legal Minimums
- Design for Student Autonomy in EdTech
- Build Ethical Controls into Scraping Infrastructure
- Bridge AI Perception Gap with Public
- Allocate Resources for AI Safety Research
- Redesign Persuasive Elements for User Trust
- Integrate Ethical Responsibility in AI Products
Prioritize Human Values in Tech Design
In technological development, ethical considerations are often overshadowed by innovation, but it’s crucial to address them upfront. One concept that has been on my mind is that of digital dark patterns. These are subtle design tricks used to manipulate or mislead users into making choices they might not otherwise make, such as unwittingly signing up for a recurring subscription.
A concrete framework to tackle this, which isn’t widely discussed, is the “Value-Sensitive Design” approach. This involves systematically focusing on human values throughout the design process. We prioritize transparency with our clients and ensure that their users are fully informed. By integrating feedback loops and iterative testing grounded in real human experiences, we avoid creating systems that compromise user autonomy.
Essentially, this involves aligning technological solutions with genuine user needs and ethical standards, ensuring that functionality supports rather than exploits users. This often involves conducting user workshops to understand various cultural and individual values, which serves as a foundation for ethical alignment in design choices.
Sinoun Chea
CEO and Founder, ShiftWeb
Balance AI Efficiency with Fairness
We view ethical considerations as a critical component of responsible tech development, particularly as ESG (Environmental, Social, and Governance) expectations continue to evolve. One ethical dilemma we’ve thought deeply about involves building AI-powered tools for the financial services industry. For example, when developing automation or decision-making algorithms, there’s always a risk of unintentionally reinforcing bias based on skewed training data.
To address this, we work closely with clients to implement rigorous testing and validation processes, and we advocate for explainable AI to ensure transparency in decision-making. We believe it’s not just about building effective tools, but about building tools that treat users fairly and align with broader societal values. Ethical tech is not a checkbox, but an ongoing responsibility.
Sergiy Fitsak
Managing Director, Fintech Expert, Softjourn
Turn Ethical Concerns into Business Opportunities
Without wanting to sound flippant, ethical problems can often be viewed as business opportunities. Having founded and run two digital analytics agencies, I have frequently had to consider privacy concerns, particularly in light of rapidly evolving laws and regulations. This has led me to create and host Privacy4Marketers, a hugely successful annual data privacy conference that continues to grow. We’ve also been developing apps to help privacy-conscious marketers and web analysts. If you’re concerned about the ethics of a situation, chances are there’s a market out there that shares your concerns, and they’re looking for someone to provide a solution.
Phil Pearce
Founder, CEO & Analytics Director, MeasureMinds
Empower Users with AI Assistive Tools
Ethical considerations are integrated into every aspect of tech development, particularly in healthcare. We’re building systems that handle people’s most personal, sensitive information, so the ethical weight of every product decision is very real.
One ethical dilemma we’ve thought deeply about is how we utilize AI, particularly in the context of clinical note generation. On the surface, it’s a huge time-saver for practitioners. But we had to ask: what happens if a user blindly accepts an AI-generated note that isn’t clinically accurate? What’s our responsibility in making sure the tool empowers the clinician, rather than replacing their judgment?
We approached it by making sure our AI features are positioned as assistive, not autonomous. The clinician is always in control, and we make it clear that they’re responsible for reviewing and editing everything before it’s finalized. We also avoided overly aggressive automation in areas where nuance matters. It’s a line between convenience and clinical safety, and we’ve chosen to err on the side of trust and transparency.
Ethical design isn’t just about avoiding harm; it’s also about promoting well-being. It’s about building tools that respect the people using them, their expertise, and the lives they serve. If we ever lose sight of that, we’re not building the right kind of technology.
Jamie Frew
CEO, Carepatron
Inject Diversity to Combat Echo Chambers
Ethics in tech isn’t just a philosophical checkbox — it’s the quiet, constant responsibility that follows every line of code we write and every feature we launch. As developers and leaders, we shape behaviors, access, and outcomes for real people, whether we intend to or not. And that realization hit me hard during a project that involved AI-generated content for user personalization.
We were building an intelligent recommendation engine — nothing groundbreaking, just better suggestions, smarter interfaces. However, as we trained the model, we began to notice that it was reinforcing a narrow set of user behaviors. The longer the algorithm ran, the more it nudged people into a digital echo chamber. The data said, “They engage more this way,” but my gut said, “This isn’t good for them.”
The ethical dilemma was clear: Should we prioritize engagement at all costs, knowing the algorithm was feeding users what they liked, or should we consciously inject diversity — even if it meant a slight dip in the metrics? We chose the harder route. We introduced a manual override that injected a percentage of content outside the user’s known preferences. It wasn’t as sleek from a data science perspective, but it was right for the human experience.
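The manual override described above can be sketched as a simple slot-replacement step layered on top of the ranked feed. This is an illustrative reconstruction, not the team’s actual implementation; the function names and the 20% injection rate are assumptions.

```python
import random

def diversify(ranked_items, exploratory_items, inject_rate=0.2, seed=None):
    """Replace a fraction of slots in a ranked feed with items drawn
    from outside the user's known preferences.

    ranked_items: item IDs, best-first, from the engagement model.
    exploratory_items: candidates the model would not normally surface.
    inject_rate: fraction of feed slots reserved for exploratory content.
    """
    rng = random.Random(seed)
    feed = list(ranked_items)
    n_inject = max(1, int(len(feed) * inject_rate))
    # Pick which slots to override and which exploratory items fill them.
    slots = rng.sample(range(len(feed)), k=min(n_inject, len(feed)))
    picks = rng.sample(exploratory_items, k=min(len(slots), len(exploratory_items)))
    for slot, item in zip(sorted(slots), picks):
        feed[slot] = item
    return feed
```

The engagement metrics still drive most of the ranking; the override simply guarantees a floor of out-of-profile content in every feed.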
This experience taught me that ethical tech requires you to zoom out and look beyond the KPI dashboard. If you’re not actively checking your work for bias, manipulation, or unintended consequences, you’re not building responsibly. Ethics isn’t the absence of bad intention — it’s the presence of deliberate guardrails.
Especially with generative AI and automation advancing rapidly, we need more uncomfortable conversations integrated into development cycles. It’s not about avoiding all risk. It’s about knowing the human cost if we don’t ask the tough questions early — and often.
John Mac
Serial Entrepreneur, UNIBATT
Slow Down for Responsible Innovation
I believe ethical considerations in technological development are not merely a checkbox — they are a responsibility. Every line of code we write and every feature we deploy has real-world consequences. Technology does not exist in a vacuum; it shapes behavior, amplifies voices, and sometimes unintentionally excludes or harms. Therefore, I approach development with a mindset of humility and accountability, asking not just, “Can we build this?” but, “Should we?”
One ethical dilemma that has remained with me involved a client project that utilized facial recognition to enhance user convenience. On the surface, it appeared to be a cool, cutting-edge feature. However, the more I investigated, the more concerned I became about the implications — including privacy risks, potential bias in the dataset, and a lack of clarity regarding data storage. The technology functioned, but I could not ignore that it was being implemented without user consent or a plan for transparency.
I ultimately decided to push back strongly, even at the risk of losing the project. This led to a redesign that gave users control over opting in, with clearer messaging and anonymized storage. That experience reinforced a core belief for me: just because a solution is innovative, it does not necessarily mean it is ethical. Sometimes, slowing down is the most responsible action a developer can take.
Sovic Chakrabarti
Director, Icy Tales
Respect Privacy in GPS Tracking Solutions
When developing any kind of technology, it is essential to respect the people you aim to serve. When working with GPS tracking solutions, respecting the people that PAJ GPS serves means respecting their privacy and consent. Our products are designed to keep track of those we love, but we always prioritize the safety of our users’ data. If you are building an ethical tech product or service, you need transparent data policies, safe encryption, and clear opt-in consent to give your users full control over their information.
One ethical tech dilemma I’ve given considerable thought to is the use of geo-fencing alerts for employee vehicles. On one hand, such alerts have the power to increase safety, prevent theft, and provide extensive data to optimize fleet management and operations. On the other hand, constantly monitoring the movement of your employees risks breaking the trusting relationship you’ve built, since workers may feel like they are always being watched. One solution to this dilemma is to configure working hours on the trackers, so that tracking is active only while workers are performing their scheduled shifts.
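A working-hours gate of the kind described can be sketched in a few lines. This is a hypothetical illustration, not PAJ GPS’s actual firmware logic; the `SHIFTS` schedule and function names are invented for the example.

```python
from datetime import datetime, time

# Hypothetical shift schedule: weekday index -> (start, end) of the shift.
SHIFTS = {
    0: (time(9, 0), time(17, 0)),   # Monday
    1: (time(9, 0), time(17, 0)),
    2: (time(9, 0), time(17, 0)),
    3: (time(9, 0), time(17, 0)),
    4: (time(9, 0), time(13, 0)),   # short Friday
}

def tracking_active(now: datetime, shifts=SHIFTS) -> bool:
    """Return True only when the employee is on a scheduled shift,
    so location data outside working hours is never recorded."""
    shift = shifts.get(now.weekday())
    if shift is None:               # no shift that day (e.g., weekend)
        return False
    start, end = shift
    return start <= now.time() < end
```

Checking the schedule before recording anything, rather than filtering data after collection, keeps off-shift locations out of the system entirely.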
Alex Sarellas
Managing Partner & CEO, PAJ GPS
Navigate Data Collection and Customer Privacy
I would say that a major ethical consideration with any kind of technology is the balance between customer privacy and the necessity of data collection and analytics. I believe this will become an even more significant consideration as AI technology continues to develop and become more thoroughly integrated into the tech sphere. Often, technology, software, and SaaS companies need customer data to build on their product and deliver more value. However, I think ethical considerations arise when we’re talking about things like disclosure and client permissions for data collection.
Soumya Mahapatra
CEO, Essenvia
Address Bias in AI Decision-Making Systems
One ethical challenge that often arises is bias in AI models, particularly when developing systems that make decisions affecting individuals, such as hiring tools or credit scoring.
This issue manifests during model training. If historical data contains bias (even subtly), the model can reinforce it. For example, an AI tool for shortlisting job candidates might begin favoring resumes that match past hires, inadvertently filtering out equally qualified individuals from different backgrounds.
This problem can be addressed by incorporating fairness checks into the pipeline, such as regularly auditing outputs, utilizing diverse training data, or reconsidering how inputs are weighted. The key point is that ethical risk often lurks in seemingly “neutral” data.
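One common form of the fairness check mentioned above is a per-group selection-rate audit using the “four-fifths” heuristic. A minimal sketch follows; the function names and the 0.8 threshold are illustrative, not a claim about any particular pipeline.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected a bool.
    Returns the fraction of positive decisions per group."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    best-performing group's rate (the common 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if best and r / best < threshold}
```

Running an audit like this on each retraining cycle turns “regularly auditing outputs” from a principle into a concrete pipeline step.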
Balancing efficiency with fairness is where the real judgment call comes into play. It’s not just about what the technology can do — but what it should do. This framing tends to help ground these decisions.
Vipul Mehta
Co-Founder & CTO, WeblineGlobal
Prioritize User Fairness Over Legal Minimums
We believe tech companies must assume responsibility beyond legal compliance. Doing what’s “allowed” isn’t always doing what’s right. Ethical thinking asks how tools affect those with the least power. It’s about whose experience gets prioritized and whose gets ignored.
Once, a vendor suggested fingerprinting devices to prevent abuse of free trials. It was legal but lacked transparency and flexibility for shared users. We chose to absorb some risk and protect user fairness. Ethical compromise should never be the default option.
Marc Bishop
Director, Wytlabs
Design for Student Autonomy in EdTech
One ethical dilemma that particularly resonated with me occurred during the early days of developing ClassCalc. We had the idea of giving teachers a way to block all non-calculator apps during tests — it seemed like an excellent solution for preventing cheating. However, I began to consider the implications: what happens when you give a teacher’s phone that level of control over a student’s device? This concern was particularly relevant in public schools where students may already feel monitored or distrusted.
This situation forced us to think not just about what’s technically possible, but what’s responsible. We ultimately designed ClassCalc so that it only locks the calculator screen during active test sessions initiated by the teacher — avoiding full device lockdowns and hidden tracking. That balance between security and student autonomy was crucial. It’s a small example, but it demonstrated to me that just because you can build something, doesn’t mean you should — at least not without considering who it might impact and how it could potentially be misused. In the tech industry, ethics isn’t an afterthought — it’s the foundation of responsible development.
Daniel Haiem
CEO, App Makers LA
Build Ethical Controls into Scraping Infrastructure
As someone building AI-powered infrastructure, I constantly consider ethical considerations, particularly regarding data scraping and usage.
One dilemma I faced early on was how to design a system that could extract data at scale without enabling misuse or compromising the intent of a site’s design.
The technical challenge was building something powerful, but the ethical challenge was asking, just because we can scrape something, should we? For example, some websites don’t block scrapers outright, but it’s obvious from the structure and usage context that they weren’t meant to be harvested and republished. We built in controls such as rate limits, opt-out rules, and compliance toggles, which enable our users to be more deliberate about what they collect and how.
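Rate limits and opt-out rules of the kind described can be sketched with Python’s standard library alone. This is an illustrative sketch, not MrScraper’s actual code; the class and method names are invented, and `allowed()` performs a live robots.txt fetch, so it belongs behind whatever compliance toggles the product exposes.

```python
import time
import urllib.robotparser
from urllib.parse import urlparse

class PoliteFetcher:
    """Per-domain politeness controls: honors robots.txt opt-outs and
    enforces a minimum delay between requests to the same host."""

    def __init__(self, user_agent="example-bot", min_delay=1.0):
        self.user_agent = user_agent
        self.min_delay = min_delay
        self._robots = {}       # host -> RobotFileParser
        self._last_hit = {}     # host -> timestamp of last scheduled request

    def allowed(self, url):
        """Check the site's robots.txt opt-out rules before fetching."""
        host = urlparse(url).netloc
        if host not in self._robots:
            rp = urllib.robotparser.RobotFileParser()
            rp.set_url(f"https://{host}/robots.txt")
            rp.read()           # network call
            self._robots[host] = rp
        return self._robots[host].can_fetch(self.user_agent, url)

    def wait_turn(self, url, now=None):
        """Return how long the caller should sleep before hitting this
        host, so requests stay at least min_delay apart per domain."""
        host = urlparse(url).netloc
        now = time.monotonic() if now is None else now
        delay = max(0.0, self.min_delay - (now - self._last_hit.get(host, -1e9)))
        self._last_hit[host] = now + delay
        return delay
```

Keeping the throttle per-host rather than global means one busy target can’t be hammered just because overall traffic is low.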
For me, ethical tech doesn’t mean removing capabilities; it means embracing them responsibly. It means designing with intention and giving users the tools to act responsibly.
And that mindset is what helps us build infrastructure that not only scales but also earns trust as it grows.
Cahyo Subroto
Founder, MrScraper
Bridge AI Perception Gap with Public
I believe that ethical considerations are often not adequately taken into account in tech development these days, especially when it comes to technologies like AI. If you were to stop any person on the street and ask them about their opinion of AI, they would likely have a few ethical questions or concerns about it.
Some that immediately come to mind include AI replacing jobs, AI making scam or fraud attempts more difficult for the average person to identify, and the impact of AI on the environment. I am constantly thinking about how the perception of AI differs vastly among the general public compared to major tech companies and investors. I believe developers and investors need to genuinely listen to the concerns the public raises, so that technology can be developed in a more ethical manner.
Edward Tian
CEO, GPTZero
Allocate Resources for AI Safety Research
One major ethical challenge is the allocation of resources for AI safety research. AI safety should not be a side project, and it shouldn’t be underfunded.
Companies typically funnel only 1-5% of their compute budgets toward AI safety research, when in reality, that figure should be 20-40%.
Here’s the actual dilemma: every dollar spent protecting future society means fewer customer features shipped today. That’s a difficult trade-off: immediate business needs versus long-term responsibility.
We addressed this by implementing “constrained alignment,” which provides built-in guardrails for autonomous systems.
But there’s an even trickier ethical dilemma: we’re democratizing powerful AI capabilities, putting advanced tech into thousands of hands.
Some will handle it responsibly; others won’t. Do we restrict access to prevent misuse or trust openness to drive better outcomes? We believe the Internet of agents needs to be free and open.
Alexander De Ridder
Co-Founder & CTO, SmythOS.com
Redesign Persuasive Elements for User Trust
One ethical dilemma I’ve given considerable thought to is how persuasive design can subtly influence the way people behave online. While working on an e-commerce website for a client, we added features such as countdown timers and “only a few left” messages to create a sense of urgency. It boosted sales, but I started to wonder if we were crossing a line.
Were we helping people make quick, informed decisions, or just pressuring them into buying something they might not really want? That question stuck with me.
We ended up redesigning those elements to be more honest and less pushy. Instead of creating pressure, we gave users the option to set reminders or save items for later.
That experience made me realize ethical design isn’t just about protecting data. It’s also about how our choices as developers can shape someone’s experience, emotions, and trust.
Nirmal Gyanwali
Website Designer, Nirmal Web Design Studio
Integrate Ethical Responsibility in AI Products
As the founder of an AI image editor website, I have come to realize that ethical responsibility is not just an afterthought — it needs to be integrated into the product from the very beginning. AI provides people with incredible creative power, but this also means we need to carefully consider how that power might be used, especially in ways we did not anticipate.
A pivotal moment that highlighted this occurred during our internal testing. One of our testers took a photo of two people standing together, used our AI to remove one of them, and then placed that person into a completely different background — a luxury villa. From a technical standpoint, it worked flawlessly. The lighting and shadows were well-matched, and the final image looked entirely realistic.
However, that was precisely the problem — it wasn’t real. The person had never actually been in that setting. Although the edit was not intended to be harmful, it made me realize just how easily our tool could be used to create misleading images. While it wasn’t a traditional deepfake, it still altered the context of the photo in a way that could be misinterpreted or exploited.
Nam Ton That
Founder, ai-imageeditor.com