
Ashley Bryant-Baker, Director of Data & Analytics at Fresh Eyes Digital – Interview Series


Ashley Bryant-Baker is Director of Data and Analytics at Fresh Eyes Digital, a consulting firm focused on the success of non-profit organizations. Before Fresh Eyes Digital, she ran her own consulting firm, B&B Data Solutions, where she helped brands build and leverage data solutions. She has worked in analytics for over a decade in industries ranging from consumer packaged goods and travel to logistics, healthcare, and non-profits.

She has become a sought-after speaker on the topics of Gender Bias in AI, Customer Segmentation Using AI, and diversity in the workplace. She has recently been invited to speak at various events, including SXSW, Data Minds Connect, and Digital Summit DC. Ashley has attended The American Graduate School in Paris, Georgetown, LSU, and Fort Hays University. She holds a Master’s degree in International Economics, a Certificate in Data Science, a Bachelor’s in Business, and a Bachelor’s in Art.

What initially attracted you to computer science and data science?

In college I studied art and was interested in working for a video game company as a game designer when I graduated. My plan was to build 3D computer models and design characters and objects that people would interact with in the game. I even worked as a video game quality assurance tester for EA Sports in college. Since there was no computer arts concentration at my university at the time, I decided to minor in computer science to supplement my art degree. I initially did not like the computer science classes at all. There was a borderline hostility from other students and even some professors toward people who, like myself, had no experience. I stuck with my minor because my goal for my art capstone was to design and program a working video game. I used Python and Maya to build a 3D chess game with animated pieces that walked across the board and a very simple AI that could play against you. At the time I didn’t know anything about Python, and I assumed I would never use it again.

Fast forward to one of my first jobs out of college: I was working at a marketing firm as a junior project manager. I worked with a team of artists, marketing specialists, and production specialists, plus the one analyst in the entire building, who was managing analytics for around 15 clients on her own. She would ask me for help periodically to double-check her math or create simple reports. When she had to go on medical leave for several weeks, she asked my supervisor whether I could take over for her while she was gone. When she came back, I asked for a transfer to her department. Working with the data was so interesting to me. It was definitely an unexpected turn in my career, but I haven’t looked back since. I kept wanting to learn more, so I took courses and applied for analytics jobs where I could learn from others. Then everything came full circle and I was working with Python again, albeit in a totally different way than before.

All this to say: I initially got into data science purely by accident.

You’re currently Director of Data & Analytics at Fresh Eyes Digital, a company that works with nonprofits. Could you share what the company does and the work that you do there?

Fresh Eyes is a consulting firm that provides marketing and fundraising support to nonprofits. We work with clients to understand their donors, build digital campaigns around nonprofit goals, and help nonprofits understand how their digital presence can be elevated to meet those goals. Fresh Eyes hired me because they wanted to build a more robust data offering. Initially I worked with them as a consultant, helping them design digital multivariate tests, understand the results, and automate analytics and dashboarding services. Now I am working with them to build out a suite of offerings for nonprofits. Some projects I am working on include predictive analysis of constituents’ and donors’ conversion and donation rates over time. That means understanding the effects of external factors, such as the political climate, economic changes, and news cycles, as well as internal factors, such as marketing messaging strategies, nonprofit impact reports, and even movement in leadership roles within an organization, and how these can all affect the propensity to convert. A lot of this information feeds our forecasting analytics, dashboards, and classification models, helping us better understand donors and engagement.

Nonprofits are embracing advanced statistical methods and realizing that when they can better understand their impact and raise money in a more structured fashion, they are better able to see their mission through.

One of your proudest achievements is being an advocate for diversity in STEM, could you share some of these highlights?

There are so many great organizations out there working toward diversity and equity in STEM: Black Girls Code, ByteBack here in DC, DataKind, and most recently my sorority, Zeta Phi Beta Inc., which, along with several other organizations, has partnered with Google to train underrepresented groups in computer and technical skills. I do my part by volunteering with these organizations, mentoring people newly entering the field, speaking at events (particularly tech events, where I am sometimes the only woman or person of color), and teaching workshops in schools (especially majority-minority schools, rural schools, and alternative schools). Additionally, I’ve worked with several businesses to diversify their internship programs and entry-level graduate programs. A lot of this work I did out of habit. I grew up in a home and a community where volunteering was a part of everyday life, and I’ve continued that through college and beyond with Zeta Phi Beta Inc. However, I think I gravitated toward this area because I did not have the opportunity to learn about computers and coding until college, and when I got there I remember the sense of negativity I felt pursuing my minor in computer science. I don’t want anyone, especially someone who is trying to learn and better themselves, to experience that. I don’t think I really realized the impact I was making until I was talking to a group of students at a recruiting event and a young black girl and her mom came up to me and said that I was the first technical black woman they had ever seen at any conference or recruiting event. That was when I knew I had to make this part of my regular routine.

I try to participate in these kinds of programs regularly. In fact, on March 16th I will be co-hosting a hackathon with an amazing data scientist and good friend of mine, Swathi, in conjunction with Girls in AI.

You’ve also worked on expanding tech education in rural and/or low-income neighborhoods. How significant of an issue is this?

Wow. There is not enough time to talk about what a huge issue this is! Coronavirus made it crystal clear that there are systemic inequities in our society, and unfortunately one of the biggest is in education. I have a good friend who works at an alternative school on the outskirts of DC. Students there are often older, have to hold down jobs in addition to going to high school, and don’t always have the tools at home, like a laptop or desktop computer, to do distance learning. These students had a teacher who advocated for them, working with the school to get a mobile option so that most of the students could access school on their cell phones. However, that is not always the case in low-income or alternative school environments. The rural situation is similarly difficult for students and teachers. High-speed internet can be VERY expensive in rural areas, and in some cases it is unavailable. Students sitting in McDonald’s parking lots for internet access is unacceptable, but it is an unfortunate necessity in some of these areas. I know teachers in rural Pennsylvania who themselves cannot get internet good enough to connect to their virtual classrooms.

Outside of coronavirus, there is the issue of underfunding in rural and low-income schools; a lack of technically trained teachers, particularly in rural areas where attracting talent can be difficult; and, of course, the general bias against students of color, immigrant students, and even rural students who may look or sound different from the more widely accepted “American” culture. All of these factors compound the lack of access to STEM education, leaving students who are never exposed to those subjects and careers.

How big of an issue is gender and racial bias in AI?

It is something all businesses and organizations should think about. Unfortunately, this is a tough one to solve, because if AI is showing bias toward or against a particular group, it most often means that the company or organization already had a pattern of bias in that area to begin with. AI relies on past patterns to predict future behavior, and it simply amplifies that behavior. However, it is difficult to get humans to recognize their own biases; we all have them, and we often act on them unconsciously. There need to be systems in place to help mitigate these biases and keep teams accountable, on both the technical and the business side.

How can we ensure that the AI applications of today don’t amplify the biases humans have?

There are some steps organizations can take to create a standard of practice in data science and AI that helps mitigate bias. I cannot express enough how this has to be a collaborative process between technical groups and business groups. Context that is not always visible to technical teams is paramount.

It begins with recognizing and identifying potential sources of bias. Bias can enter in the data collection process or in the feature selection for model building, or it can arise completely outside of the data, in business practices. For example, I was once asked by a leader at a company whether their core audience was really older, wealthy men who more often lived in rural or suburban neighborhoods. I looked at the data and realized that their data pipeline had an overrepresentation of this group. But I also noticed that the bulk of their customers came from the same media source: conservative talk radio. I learned from a member of the marketing team that the company had received low- to no-cost marketing on these platforms early in its launch, and the bulk of its customers reflected this. The bias wasn’t in the data but in the lack of diversification in the communications strategy. As a result, however, the lifetime value scoring model that the data team created scored older, wealthier men from suburban and rural communities as the best-performing customers, magnifying the communication strategy the marketing team had employed. This is something no technical team should be responsible for knowing, but they should be responsible for asking the right questions.
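The kind of pipeline audit described here can start with very simple summary statistics. Below is a minimal Python sketch, using entirely hypothetical records and field names, of how a team might check whether one acquisition channel dominates the customer base and skews toward one demographic segment before that skew is baked into a scoring model:

```python
from collections import Counter

# Hypothetical customer records: (acquisition_channel, demographic_segment).
customers = [
    ("talk_radio", "older_male"), ("talk_radio", "older_male"),
    ("talk_radio", "older_male"), ("talk_radio", "younger_female"),
    ("social", "younger_female"), ("social", "older_male"),
]

def channel_mix(records):
    """Share of customers arriving from each acquisition channel."""
    counts = Counter(channel for channel, _ in records)
    total = sum(counts.values())
    return {channel: n / total for channel, n in counts.items()}

def segment_mix_by_channel(records, channel):
    """Demographic breakdown within a single channel."""
    counts = Counter(seg for ch, seg in records if ch == channel)
    total = sum(counts.values())
    return {seg: n / total for seg, n in counts.items()}

# If one channel dominates AND skews toward one segment, a lifetime-value
# model trained on this data will inherit that skew from the marketing
# strategy, not from anything inherent to the audience.
print(channel_mix(customers))
print(segment_mix_by_channel(customers, "talk_radio"))
```

Printing these two breakdowns is the prompt for the "right questions" mentioned above: it surfaces the overrepresentation so the data team can raise it with marketing rather than silently model around it.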

This leads me to the second step, which is to set guidelines for looking for bias and then dealing with it once it has been discovered. Once you’ve identified potential sources of bias, the organization should create a checklist of those sources to check for issues, and create a pathway for someone who finds concerning data or patterns to address them. This cannot be done in a vacuum. It is the responsibility of all teams to ensure that applications do not amplify biases. As in the example above, the data team has no responsibility over the communication strategy. They can help pinpoint the findings and then work with other teams in the organization to address them. In this case, the communications team worked with the data science team to test other communications strategies that served different demographic groups.

When biases show up in data models, it may sometimes be in how a data team determines feature selection, in what data is included or excluded in the data stores, or even in the metric being predicted. In these cases it is important for the data team to understand that model accuracy is not always equivalent to model fairness. It may be true that including certain features in a model increases its predictive accuracy, but that additional 0.5% of accuracy may come at a societal or business cost. Determining what fair means is not an easy task and requires the participation of multifaceted teams. One methodology, called “counterfactual fairness,” considers a decision fair toward an individual if it is the same in the actual world and in a counterfactual world where the individual belongs to a different demographic group. Additionally, Microsoft and Google AI have published standards for accounting for fairness in AI. I personally reference the EU guidelines on ethics in artificial intelligence, which I find to be pretty comprehensive for my industry. Once a standard of fairness is established, the data team can determine whether the solution is to process the data beforehand, alter the system’s decisions afterwards, or incorporate fairness definitions into the training process itself. The question of bias in data is a complex one that requires regular evaluation and the voices of a wide range of people. It is not simply a technical issue to be solved.
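The counterfactual idea can be approximated with a simple attribute-swap test: score a record, swap only the demographic attribute, and compare. This is a rough first-pass screen, not full counterfactual fairness (which properly requires a causal model of how the attribute influences other features). The sketch below uses a hypothetical scoring function and field names for illustration:

```python
def score(record):
    # Hypothetical scoring model; in practice this would be a trained model.
    # This particular model is demographic-blind: it ignores the "gender"
    # field entirely, so the attribute-swap gap should be zero.
    return 0.3 + 0.01 * record["past_donations"]

def counterfactual_gap(record, attr, alt_value, model):
    """Absolute change in score when only `attr` is swapped to `alt_value`."""
    flipped = dict(record, **{attr: alt_value})
    return abs(model(record) - model(flipped))

donor = {"past_donations": 12, "gender": "female"}
gap = counterfactual_gap(donor, "gender", "male", score)
print(f"counterfactual gap: {gap:.3f}")  # 0.000 for this demographic-blind model
```

A nonzero gap flags that the protected attribute, directly or through a correlated feature, is changing the model’s decision, which is exactly the kind of finding the checklist and escalation pathway described above are meant to catch.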

What are your views on government enacted AI and data ethics policies?

I think there have been movements in the right direction toward creating a standard of procedure for AI and data ethics. Trump’s executive order on AI ethics creates a registry of models deployed within the government, sets up a timeline for creating policy guidance, encourages agencies to hire tech-focused teams and individuals, and encourages transparency in AI use throughout the government in areas not involving R&D or national security, which I think is immensely important. This kind of far-reaching plan is an exciting development in a government that has historically been slow to adopt technology. However, the policies have done little to create a culture of ethics, to create compulsory or cohesive plans across agencies, or even to define exactly what ethics or fairness means in these contexts. As the new administration comes in, I would impress upon them to solidify these plans with a more structured and cohesive approach across all agencies, as well as an evaluation procedure that carefully considers the human impact, since so much of the work our government does affects the daily lives of people both domestically and abroad.

Is there anything else that you would like to share about your work with Fresh Eyes Digital?

Data science can be used by nonprofits, who are working to improve our world in so many ways, to increase their impact. For these organizations, collecting data is not typically the problem; they have lots of data to work with. Using this data in clear and actionable ways is what is difficult for organizations that are often squeezed on resources and may not have an internal analytics team at the ready. The work we do in the data department at Fresh Eyes Digital helps those organizations understand and deploy their data in the right ways, making more well-informed, strategic decisions. I am glad to have the opportunity to work with these organizations in a way that helps make them more efficient and effective as they work to impact our world in positive ways.

Thank you for the detailed answers, and I look forward to following your future ventures. Readers who wish to learn more should visit the Ashley Bryant-Baker website and/or Fresh Eyes Digital.

Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is the co-founder of Securities.io, a news website focusing on digital assets, digital securities, and investing. He is a founding partner of unite.AI and a member of the Forbes Technology Council.