Family Promise Spokane: Contributing to ending the cycle of homelessness

Framework

Mudesir Suleyman
6 min read · Feb 4, 2021

There’s no better feeling than putting a smile on someone’s face or helping someone in need. Being compassionate does more than increase feelings of well-being; it can also transform insecurity, stress, and depression into pleasure and happiness. As a data science student at Lambda School, I completed the Data Science course, and at the end of the course we go through a Labs course, teaming up with students from other backgrounds (frontend, backend, and data science) to work with stakeholders on building and developing their website. I had the opportunity to work with the organization Family Promise of Spokane and contribute a small piece of my knowledge to enhance their work.

Family Promise Spokane is a nonprofit organization that helps prevent families from becoming homeless and helps families who are already homeless by providing shelter options and the guidance they need to get back on their feet and stay there. Their vision is that no child experiences homelessness in Spokane County, and their mission is to equip families and communities to end the cycle of homelessness.

The stakeholders had been using a manual, paper-based system: guests were checked into a shelter on paper forms, and their information was kept in a filing cabinet and uploaded by hand, which was time-consuming and frustrating for both the employees and the families trying to check in. Our team’s primary goals were to reduce the time for the overall intake and release process, to give decision-making employees the highlights through a prediction model and data visualizations so supervisors can choose the best succession plan for a guest, and to allow supervisors to check the statuses of families on-site at the shelter during check-in times. We did this by building a user experience and storing information in a database, automating what had previously been a paper filing method. The original plan was to implement everything with FastAPI, but because we wanted more flexibility and more options, we ended up using both Streamlit and FastAPI.

With a one-month time frame, I was worried that we would end up with several incomplete features from trying too hard to deliver every feature, rather than having a smaller set of features that were user-ready. On the other hand, we were using two different technologies to produce our API, FastAPI and Streamlit, which do not fit together naturally, so migrating files from one to the other was a problem.

Technical challenges and Tasks:

When planning the work of developing the best predictive model and creating data visualizations, our data science team of four split the tasks into visualization, coding, model tuning, and API development, and each of us took responsibility for a piece independently. The original plan was to build the most accurate prediction model alongside the data visualizations, supporting code, and API. After several meetings, we realized the visualizations we created should be driven by the stakeholders’ interests, and because of time constraints we focused on data visualization. As a team, we agreed that instead of creating simple charts we should build the most useful views we could for the end users, the case managers, and especially the supervisors, highlighting the features that most influence the model’s predictions. As mentioned previously, we used two frameworks to serve our products: Streamlit and FastAPI. Streamlit is an open-source Python library that makes it easy to create and share beautiful web apps for machine learning and data science. Our team created and developed different graphs and charts for navigating descriptive statistics, which help the case managers and supervisors give clear suggestions and ground their decisions. My assignment was to prepare the DS API, which returns a guest’s personal information and Shapley (SHAP) plot and makes a prediction, building on the previous team’s code.
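To give a flavor of what such a Streamlit page looks like, here is a minimal sketch; the guests.csv file and its column names (age, gender, exit_destination) are hypothetical placeholders, not the actual Family Promise data or schema:

```python
# A minimal sketch of a Streamlit page showing descriptive statistics.
# The CSV path and the column names are illustrative assumptions only.
import pandas as pd
import plotly.express as px
import streamlit as st

st.title("Guest overview")

# In the real app this data would come from the shelter database, not a CSV.
guests = pd.read_csv("guests.csv")

# Simple summary statistics for case managers and supervisors.
st.subheader("Summary statistics")
st.dataframe(guests.describe(include="all"))

# Distribution of a feature picked from a dropdown.
feature = st.selectbox("Feature to plot", ["age", "gender", "exit_destination"])
st.plotly_chart(px.histogram(guests, x=feature))
```

Running `streamlit run app.py` on a file like this serves the page locally, which is roughly how we shared draft visualizations with each other.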

This was the code used in the API to generate the Shapley plot visualization. It pulls a guest’s information from the database, passes it into a function, and converts the generated Plotly figure into a JSON object to be displayed on the front end. The function takes the guest’s row and predicts their exit classification using the pickled CatBoost model. TreeExplainer is then used on the model to compute its SHAP values, which removes the sampling-based estimation variance and keeps dependencies between features from skewing the results. The feature names and values for the guest are then stored in a sorted series, and that information becomes the data shown on the Plotly chart. Originally we displayed the visualizations using Pyplot, but we converted them to Plotly so that we could more easily retrieve the JSON for the front end. On the web development side of things, the team built the intake forms and dashboards, along with many of the gritty details necessary for smooth and efficient use of the product.
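I can’t reproduce the team’s exact code here, but a minimal sketch of that flow might look like the following; the model file name, the fetch_member_features helper, the /shap route, and the assumption of a two-class model are all placeholders for illustration, not the real DS API:

```python
# Sketch of the DS API logic described above, not the team's exact code.
import pickle

import pandas as pd
import plotly.graph_objects as go
import shap
from fastapi import FastAPI

app = FastAPI()

# Load the pickled CatBoost classifier once at startup (file name is illustrative).
with open("catboost_model.pkl", "rb") as f:
    model = pickle.load(f)

# TreeExplainer computes exact SHAP values for tree ensembles,
# avoiding the sampling variance of model-agnostic explainers.
explainer = shap.TreeExplainer(model)


def fetch_member_features(member_id: int) -> pd.DataFrame:
    # Placeholder: the real service queries the shelter database for this guest's row.
    raise NotImplementedError


def shap_plot_json(member_row: pd.DataFrame) -> str:
    """Predict a guest's exit class and return a Plotly bar chart of SHAP values as JSON."""
    prediction = model.predict(member_row)[0]

    # SHAP values for this single guest, paired with feature names and sorted by impact.
    # (For a multiclass model, shap_values is a list with one array per class;
    #  this sketch assumes a two-class setup.)
    shap_values = explainer.shap_values(member_row)
    contributions = pd.Series(shap_values[0], index=member_row.columns).sort_values()

    # Plotly (rather than Pyplot) so the figure serializes cleanly to JSON for the front end.
    fig = go.Figure(go.Bar(x=contributions.values, y=contributions.index, orientation="h"))
    fig.update_layout(title=f"Predicted exit: {prediction}")
    return fig.to_json()


@app.get("/shap/{member_id}")
def shap_for_member(member_id: int):
    member_row = fetch_member_features(member_id)
    return {"member_id": member_id, "plot": shap_plot_json(member_row)}
```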

The future of our product

As mentioned above, there was a lot more in the user stories we created, but with the time constraint we were not able to go very far. My first thought was to do more research on why people become homeless and which ages, genders, and races are most affected, and then to run some statistical analysis that could give policymakers suggestions for preventing or stopping the cycle of homelessness.

Making a long-term prediction is hard, but for many products you can estimate reasonably. Our team was able to set and develop the foundation for such an interesting project, and I’m so proud to be a part of it. While our amazing and talented team did an excellent job, the journey for this project is just beginning and has a ways to go before it’s ready. A few features still need to be added for data interpretation and prediction:

  • Add charts that show more descriptive statistics.
  • Add more graphs that show the trends of selected important features, to help managers make decisions.
  • Try different models for prediction; we used a random forest, and a future team might get better accuracy with a neural network (see the sketch after this list).
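As a rough illustration of that last point, a future team could compare a random forest against a small neural network along these lines; the feature matrix X and target y are placeholders for whichever guest features and exit labels they end up using:

```python
# Minimal sketch of comparing a random forest and a small neural network
# on the same (hypothetical) feature matrix X and exit-destination target y.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def compare_models(X, y):
    forest = RandomForestClassifier(n_estimators=300, random_state=0)
    # Neural networks generally need scaled inputs, hence the pipeline.
    net = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    )
    for name, model in [("random forest", forest), ("neural network", net)]:
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```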

There is so much more that could be done for this project, and it’s for such a great cause; I look forward to seeing where future teams will take it. The most amazing thing I experienced while working on this project was being able to meet and work with an amazing team leader and an incredible team. The communication between us was the best I have ever experienced from a team with such a wide variety of roles. We used Slack to check progress and the status of everyone’s tasks, relay messages and resources, and work through problems and challenges, and we stayed in constant communication and collaboration. Frontend developers, backend developers, and data scientists all worked together so well as one team. Finally, I want to thank our TPL for his leadership and help with every challenge we faced.

🙏
