For instance, Dream11 has a specialised system to ensure that users participating in every contest, including the paid ones, win in a fair and transparent manner. FENCE (Fairplay Ensuring Network Chain Entity) is Dream11’s in-house fraud detection system. It is powered by a graph database responsible for processing and maintaining all models and heuristics so that Fair Play Violations are detected in a timely and efficient manner. By embedding its long-standing principles and ethical thinking into such systems, Dream11 aims to ensure consumer data privacy.
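FENCE’s internal models and heuristics are not public, but graph databases lend themselves to the kind of fair-play check sketched below: flagging clusters of accounts tied to a shared device, a classic multi-accounting signal. This is a minimal, hypothetical illustration in Python using networkx; the event data, names, and threshold are invented for the example.

```python
# Hypothetical sketch of one common graph heuristic for fair-play checks:
# flag clusters of accounts that share the same device. FENCE's actual
# models and heuristics are proprietary; everything here is illustrative.
import networkx as nx

G = nx.Graph()

def record_login(user_id: str, device_id: str) -> None:
    """Add a user -> device edge to the identity graph."""
    G.add_node(user_id, kind="user")
    G.add_node(device_id, kind="device")
    G.add_edge(user_id, device_id)

# Sample events: three accounts logging in from one shared device.
for user, device in [("u1", "d1"), ("u2", "d1"), ("u3", "d1"), ("u4", "d2")]:
    record_login(user, device)

# A device linked to many accounts is a classic multi-accounting signal.
SHARED_DEVICE_THRESHOLD = 2
for node, data in G.nodes(data=True):
    if data["kind"] == "device" and G.degree(node) > SHARED_DEVICE_THRESHOLD:
        suspects = sorted(G.neighbors(node))
        print(f"device {node}: {len(suspects)} linked accounts -> {suspects}")
```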
Founded in 2008, Dream Sports is the parent company of Dream11, FanCode, and Dream Capital, and serves over 140 million users. The Mumbai-based sports content and commerce platform has invested heavily in emerging technologies such as AI, ML, and data science.
“AI, data, and machine learning play an indispensable role in our company across functions and processes, from start to finish. At Dream Sports, we leverage AI to make the experience better for our users, deliver top-notch customer support and regularly roll out new features that enhance fan engagement on our platform. We also use AI to craft personalised, contextual, and timely engagement campaigns to deliver an awesome user experience. For that, user-level behavioural insights are a must. For example, from a user perspective, participants benefit from machine learning-driven recommendations on which contests to join based on their past in-app behaviour and other factors,” said Abhishek Ravi, Chief Information Officer, Dream Sports.
In an exclusive interview with Analytics India Magazine, Abhishek Ravi spoke about how Dream Sports embeds ethics into Dream11, its AI-based fantasy sports platform. “Transparency is a critical part of Dream Sports’ overall success. It is more than just a buzzword for us,” he said.
Excerpts
AIM: What are the AI frameworks/methods/techniques Dream Sports uses to optimise user experience?
Abhishek Ravi: Our flagship brand, Dream11, hosts 120 million+ users on its platform on match days and gives them the option to explore thousands of fantasy sports contests across a variety of sports. Our users can actively engage with real-life sporting events and showcase their sports knowledge. With millions of users logged on to the platform simultaneously, it can be challenging to provide a seamless, best-possible experience on the app every day.
To address this, we continuously experiment with multiple features on our app and study user behaviour, such as the time taken to navigate through different screens to complete a journey and the total time spent on a particular screen, to understand navigation patterns across conversion and drop-off stages.
We strive to provide the best user experience, and, in this journey, our hero is DataAware – a self-service funnel analytics tool used by the tech and product teams at Dream11.
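DataAware itself is an internal Dream11 tool, but the kind of funnel analysis it automates can be illustrated in a few lines of pandas. The sketch below is a generic, hypothetical example: it counts how many distinct users reach each screen in a journey and derives step-wise conversion and drop-off rates. The screen names and events are invented for illustration.

```python
# Generic funnel-analysis sketch; DataAware is proprietary and the
# event data and screen names here are purely illustrative.
import pandas as pd

# Raw screen events: one row per (user, screen reached).
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "screen":  ["home", "contest_list", "team_select",
                "home", "contest_list",
                "home", "contest_list", "team_select", "join_contest"],
})

funnel = ["home", "contest_list", "team_select", "join_contest"]

# Count distinct users who reached each step, in funnel order.
reached = [events.loc[events["screen"] == step, "user_id"].nunique()
           for step in funnel]

for i, step in enumerate(funnel):
    conv = reached[i] / reached[0] * 100           # conversion vs. funnel entry
    drop = 0 if i == 0 else (reached[i-1] - reached[i]) / reached[i-1] * 100
    print(f"{step:<13} users={reached[i]}  conversion={conv:5.1f}%  drop-off={drop:5.1f}%")
```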
AIM: What explains the growing conversations around AI ethics, responsibility, and fairness? Why is it important?
Abhishek Ravi: The evolution of AI is creating new opportunities to improve people’s lives worldwide, from business to healthcare to education. It also raises new questions about how to build fairness, interpretability, privacy, and security into these systems. In such times, it is essential that not just tech leaders but all companies deploying AI make an effort to raise awareness.
Even the most sophisticated technology companies face problems, and AI and ML can only go so far in teaching machines what humans know. Hence, thorough, careful consideration and regular risk assessment are essential to achieving a well-functioning model or product. Constant monitoring, regular user feedback loops, human judgement, and good governance practices are necessary to strike the right ethical balance in the use of AI.
AIM: How does the Dream11 team ensure compliance with your AI governance guidelines?
Abhishek Ravi: We work in a cohesive pod structure called Dream Teams, which include 11 to 15 people across tech, product, design, customer support, and other functions, to ensure that every business problem we address has the best of both experiential and experimental minds, thus building the best product for users. Our Dream Teams involve a mix of fresh, experimenting minds and experienced engineers. This helps us go beyond mainstream approaches to solving a problem and test whether the product is sustainable, will work for a larger audience, and aligns with our existing company practices. We also build many home-grown solutions that address key product-related challenges such as user experience analysis, scale management, mobile app automation, security, FairPlay, etc.
Our early adoption of the cloud also helps our teams quickly test out new features, run tests at scale in load/stress environments and drive maximum efficiency. We follow generally accepted industry standards to protect the personal information submitted to us, both during transmission and once we receive it for storage or disposal. When you enter sensitive information on our registration or order forms, we encrypt that information using Secure Sockets Layer (SSL) technology. All the information we gather is securely stored within databases controlled by us. The databases are stored on servers secured behind a firewall; server access is password-protected and strictly limited. We also conduct regular internal and external audits to ensure that the right governance policies are followed.
AIM: How do you mitigate biases in your AI algorithms?
Abhishek Ravi: Biased human judgments can affect AI systems in various ways. Bias can be present in the data generated by existing systems or in the way the algorithms are designed to learn from these systems.
People tend to trust outputs from an AI/ML system once the initial barrier of scepticism is crossed. So, if human bias is missed during training, it could lead to operational, legal or ethical challenges that may be hard to recover from. There are frameworks to ensure that algorithms don’t pick up biases influenced by human decision-making and to make AI/ML systems fair. Each problem requires a different solution and a different set of data resources, with constant re-validation. While there is no single model to follow that will avoid bias, there are parameters that can inform your team, and frameworks from some of the world’s leading research labs can be incorporated into production releases. While excluding sensitive information from the model may seem like a workable solution, it still has vulnerabilities, since other features can act as proxies for the excluded attributes.
One must communicate with the data scientists to identify the best model for a given situation. It is also better to have an independent AI committee that oversees the applications and learnings of the AI algorithms and their implementations.
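As a concrete illustration of the point about excluded sensitive attributes, the hypothetical Python sketch below shows two checks such a committee might re-validate before a release: whether a remaining feature acts as a proxy for the excluded attribute, and whether positive-prediction rates diverge across groups (a demographic parity gap). The data and threshold are invented for the example.

```python
# Illustrative bias re-validation sketch; the data, feature names and
# threshold are invented and do not reflect Dream11's actual pipeline.
import pandas as pd

df = pd.DataFrame({
    "sensitive": [0, 0, 0, 0, 1, 1, 1, 1],       # excluded from training
    "proxy_feature": [0.1, 0.2, 0.1, 0.3, 0.8, 0.9, 0.7, 0.8],
    "model_output": [0, 0, 1, 0, 1, 1, 1, 0],    # binary predictions
})

# Proxy leakage: a feature still in the model that tracks the sensitive one.
leakage = df["sensitive"].corr(df["proxy_feature"])

# Demographic parity difference: gap in positive-prediction rates per group.
rates = df.groupby("sensitive")["model_output"].mean()
parity_gap = abs(rates[1] - rates[0])

print(f"proxy correlation with sensitive attribute: {leakage:.2f}")
print(f"demographic parity gap: {parity_gap:.2f}")
if parity_gap > 0.1:   # illustrative threshold
    print("flag model for review before release")
```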
AIM: Do you have a due diligence process to make sure the data is collected ethically?
Abhishek Ravi: Transparency and ethics in data collection aren’t a matter of choice. Due diligence is part of our operations and of our efforts to keep the market creative, competitive and thriving. Users are already sceptical about their data being collected, so companies must double down on their efforts to adhere to values and standards that factor in consent, laws and regulations, and monitoring. When we use third parties to assist us in processing the personal information of users, we make sure that they comply with our privacy policy and with the other confidentiality and security measures we take to prevent fraud or imminent harm and to ensure the security of our network.
AIM: How does Dream11 ensure consumer data privacy?
Abhishek Ravi: Dream11 takes the utmost care with data security. To ensure a seamless user experience for everyone who logs on to Dream11, including the privacy of their data, one of our key requirements was to understand user behaviour and preferences. This involves mapping and analysing the series of events a user performs after logging into the app, a journey that starts with engagement in the mobile app and ends with joining a contest. Multiple Dream Teams are involved in ensuring that users get the best possible product experience from our end, in the safest way possible. This also includes developing ML models to detect fraud or fake accounts on the platform.
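Dream11 has not published details of these models, but a common unsupervised approach to flagging fake accounts is to featurise each user’s event journey and score it with an anomaly detector. The sketch below uses scikit-learn’s IsolationForest on invented session features purely as an illustration of the idea.

```python
# Hedged sketch of fake-account detection on event journeys; the
# features, data and parameters are illustrative, not Dream11's models.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-session features: [events in journey, seconds to join a contest,
# distinct screens visited]. Bots often show very short, uniform journeys.
sessions = np.array([
    [12, 180.0, 6],
    [10, 240.0, 5],
    [11, 200.0, 6],
    [ 2,   3.0, 1],   # suspiciously fast, near-empty journey
    [13, 210.0, 7],
])

model = IsolationForest(contamination=0.2, random_state=42).fit(sessions)
labels = model.predict(sessions)   # -1 = anomaly, 1 = normal

for feats, label in zip(sessions, labels):
    status = "flag for review" if label == -1 else "ok"
    print(f"session {feats.tolist()} -> {status}")
```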