This is somewhat similar to this question but I could not find a solution there.
I have a project that I’ve worked on over the past 4 years. I started without any Python knowledge and learned as I went along, so there is a lot of legacy code and odd constructs that work but could break easily. I’m redoing this project with my current knowledge, hoping to end up with a better system.
This is an embedded project that runs on a small Linux board. I have my main “core” system (with its own repo) and several subfolders:
- main folder: queue script, worker script
- p_connections (database, sftp and ssh libs)
- p_hardware (gpio access, adc, other busses)
- p_system (file system, etc.)
- p_worker (measurement scripts)
- p_logging (log handlers)
- p_sensors (hardware access to the sensors and processing of their raw data)
Some of these modules (such as p_sensors) are used in other projects as well, and I plan to incorporate them as Git submodules (as that seems to be the correct way to do this?).
However, some of these submodules need access to other submodules.
- The credentials module needs to be able to read the database (p_connections module) but also log (p_logging module).
- The user needs to be able (for testing, …) to toggle GPIO pins. There’s a script for command-line control of these pins, but the pin mappings (which pins match which outputs on different boards) are stored in the settings table of a database (so accessible with a script in p_connections).
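To make the second point concrete, here is a minimal sketch of what the GPIO script depends on. All names are hypothetical, and a plain dict stands in for the settings table that p_connections would actually read from the database:

```python
# Hypothetical stand-in for the database settings table that p_connections
# reads: logical output name -> GPIO pin number for this particular board.
PIN_MAP = {"relay_1": 17, "status_led": 22}

def resolve_pin(name: str) -> int:
    """Map a board-independent output name to this board's GPIO number."""
    try:
        return PIN_MAP[name]
    except KeyError:
        raise ValueError(f"no pin mapping for output {name!r}")
```

The point is that this lookup logically belongs to the GPIO tooling, yet the data behind it lives behind p_connections, so one submodule ends up needing another.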
I could use something like
from .../p_connections import credentials
but that doesn’t seem very …elegant.
Another option I see would be to add all my submodule paths to the Python path, but that seems like overkill.
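For reference, this is roughly what that second option would look like (a sketch only; it assumes the script sits in the repo root next to the submodule checkouts, with the folder names from the layout above):

```python
import sys
from pathlib import Path

# Hypothetical: assume this file lives in the repo root, with the
# submodule checkouts as sibling directories.
REPO_ROOT = Path(__file__).resolve().parent

# Prepend each submodule directory so its top-level modules are importable
# from anywhere, e.g. `import credentials` instead of a path-relative import.
for sub in ("p_connections", "p_hardware", "p_logging", "p_sensors"):
    sys.path.insert(0, str(REPO_ROOT / sub))
```

This works, but every entry point has to repeat it, and it hides the real dependency graph, which is part of why it feels like overkill to me.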
How is this handled “properly”?
Most replies I find online are along the lines of “you’re doing it wrong, that’s not how you should structure a project”, but that’s not really helpful. I’ve read a couple of books/articles that suggest really thinking about your project structure before starting, but I can’t seem to get it right…