Project al-Khwarizmi uses art and aesthetics as a common language to help communities of color conceptually understand what algorithms and artificially intelligent systems are, as well as how and where these systems intersect with their lives. Participants combine this new information with the knowledge(s) they already possess (e.g., cultural, societal, street) to collaboratively design conceptual algorithms for combating and/or interrupting biased artificially intelligent systems using drawing, collage, and stop-motion video. Students will also eventually be introduced to creative code of the kind used to build working chatbots. When intervention is not possible, methods of preempting, or of using, the biases built into algorithmic systems to support, rather than hinder, communities of color will be explored. The best way to address racism, sexism, and other biases within artificially intelligent systems is to ensure that people of color, and others who inherently understand the need for inclusion, equity, ethics, and multimodal testing, participate in the design, production, and testing of such “smart” technologies, helping to keep “innocent” biases and historically ingrained discrimination out of newly encoded systems. From algorithms that decide which advertising follows you around the internet to whether a self-driving car is biased toward saving pedestrians or saving its passengers, algorithms are already with us and will only gain momentum in an AI-saturated future.
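
As a rough illustration of the kind of creative code students might encounter when building working chatbots, the sketch below shows a minimal, rule-based chatbot in Python. The keywords and responses are hypothetical examples for illustration only, not the project's actual curriculum or codebase.

```python
# A minimal, rule-based chatbot sketch: match keywords in the user's
# message and reply from a small table of canned responses.
# The rules below are illustrative placeholders, not project materials.
RULES = {
    "algorithm": "An algorithm is a recipe: a list of steps a computer follows.",
    "bias": "Bias can enter a system through its training data and its designers.",
    "hello": "Hi! Ask me about algorithms or bias.",
}

def reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "I don't know that yet. Try asking about 'algorithm' or 'bias'."

if __name__ == "__main__":
    print("Chatbot ready (type 'quit' to exit).")
    while True:
        user = input("> ")
        if user.strip().lower() == "quit":
            break
        print(reply(user))
```

Even a toy like this makes the design choices visible: someone decided which keywords matter and what the system says back, which is exactly the point participants are asked to interrogate in larger systems.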