## Circular dependency – is there a good design that eliminates it?

I was writing some code and came across a scenario where I considered making two classes circularly dependent, something I had not done before. I then read up on circular dependencies and whether they are inadmissible, and found that they are feasible but not desirable. But I have a dilemma about how to implement my design without circular dependencies, so I thought I would ask whether there are other suggestions available.

Imagine that you are creating an index for a number of files, and those files have a number of attributes, including an attribute that records which files the particular file refers to.

Trying to set up some classes that mimic this structure, I have written several classes:

1. `subclass` contains the definition of one set of attributes of a file; call
this attribute set A.

2. `subclass B` contains the classification attributes of a file; call
this attribute set B.

3. `fileObject` is an object that represents a file, and has one
`subclass` object and one `subclass B` object.

4. `fileSet` is an object that represents a particular set of files, and
is essentially a collection of `fileObject`s.

While creating `subclass B`, I realized that the referenced-files information inside `subclass B` is really just a `fileSet` with limited `subclass` information. Is it prudent, then, to simply make a circular reference by putting a `fileSet` object inside `subclass B`? Or, if that is a terrible idea, how should one store the information? Technically, we could create another collection class under `fileObject` that stores a set of `subclass` objects, but I am really not in favor of that, since I would need to duplicate certain functions of `fileSet` within that new class definition (for example, functions that check those objects, combine those objects, etc.).

Alternatively, I could have an `object_M` that contains a collection of `subclass` and an `object_N` that contains a collection of `subclass B` (which has an `object_M` inside it), plus a higher-level `fileSet` that contains one `object_M` and one `object_N`. That would solve the problem, but suddenly there is a new problem of needing some way to link the objects inside `object_M` and `object_N` together, another complexity in itself.

With the given scenario, should I just go with the circular dependency? Or is there a better way to do it altogether?
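One common way to avoid the hard cycle is indirection: the reference attributes store plain file identifiers rather than `fileSet`/`fileObject` instances, and the top-level set resolves identifiers to objects on demand. A minimal Python sketch of that idea (all class and method names here are invented for illustration, not taken from the question):

```python
class SubclassA:
    """Attribute set A (placeholder)."""
    def __init__(self, name):
        self.name = name

class SubclassB:
    """Attribute set B: stores plain identifiers, not fileSet/fileObject instances."""
    def __init__(self, referenced_ids):
        self.referenced_ids = list(referenced_ids)

class FileObject:
    def __init__(self, file_id, attrs_a, attrs_b):
        self.file_id = file_id
        self.attrs_a = attrs_a
        self.attrs_b = attrs_b

class FileSet:
    def __init__(self, files=()):
        self._by_id = {f.file_id: f for f in files}

    def add(self, f):
        self._by_id[f.file_id] = f

    def resolve_references(self, file_id):
        """Turn the ID list in a file's attribute set B back into FileObjects."""
        b = self._by_id[file_id].attrs_b
        return [self._by_id[i] for i in b.referenced_ids if i in self._by_id]
```

With this layout no class below `FileSet` ever holds a `FileSet`, so the dependency graph stays acyclic, at the cost of one lookup when references are resolved.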

## Is a circular reference with TypeScript array properties a bad design?

I understand that having circular dependence can be a bad design. However, I have a question regarding a certain class structure.

As an example:

ocean.ts

```typescript
import { Boat } from './boat';

export class Ocean {
    boats: Array<Boat> = [];

    getWaterLevel() {
        return 5;
    }

    createBoats() {
        for (let i = 0; i < 10; i++) {
            const boat = new Boat();
            boat.ocean = this;
            boat.engineRunning = true;

            this.boats.push(boat);
        }
    }
}
```

boat.ts

```typescript
import { Ocean } from './ocean';

export class Boat {
    engineRunning: boolean;
    ocean: Ocean;

    canMove() {
        return this.ocean.getWaterLevel() > 5 && this.engineRunning;
    }
}
```

In TypeScript this cannot be done without a circular-import problem. I have also read that people consider this a sign of bad design. The only other solution I can see is to create a third layer, something like OceanBoat, that manages the two resources. Is this bad design, or just a limitation of TypeScript?
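For what it is worth, the usual remedy is to invert the dependency: `Boat` depends only on a narrow interface that `Ocean` happens to implement, so only one module imports the other's class (TypeScript can also use `import type`, which erases the type-only import at runtime). A rough Python sketch of the same interface-extraction idea, with invented names:

```python
from typing import Protocol

class WaterLevelProvider(Protocol):
    """The narrow interface Boat actually needs from Ocean."""
    def get_water_level(self) -> int: ...

class Boat:
    def __init__(self, ocean: "WaterLevelProvider"):
        self.ocean = ocean          # Boat knows only the small interface
        self.engine_running = True

    def can_move(self) -> bool:
        return self.ocean.get_water_level() > 5 and self.engine_running

class Ocean:
    """Implements WaterLevelProvider structurally; no import cycle needed."""
    def __init__(self):
        self.boats = []

    def get_water_level(self) -> int:
        return 5

    def create_boats(self):
        for _ in range(10):
            self.boats.append(Boat(self))
```

In TypeScript the analogue would be a `WaterLevelProvider` interface in its own module, imported by both `boat.ts` and `ocean.ts`, so neither class file imports the other.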

## c++11 – Single ring / circular buffer C++ class V2

The Original Post (v1)

I am looking for comments on the updated version of the code published in the link above.

```cpp
#pragma once

#include <cassert>
#include <memory>
#include <new>

namespace datastructures {

template <typename _Ty, size_t _Size>
class CircularBufferv2 {

    // Use char as the storage type to avoid default-initialization of _Ty.
    alignas(alignof(_Ty)) char buffer[_Size * sizeof(_Ty)];
    size_t head;
    size_t tail;
    bool isFull;

public:
    constexpr CircularBufferv2() noexcept :
        buffer{0},
        head{0},
        tail{0},
        isFull{false} {
    }

    void push(const _Ty& item) noexcept {
        assert(!isFull && "Trying to insert an element into a full buffer!");

        new (&buffer[head * sizeof(_Ty)]) _Ty(std::move(item));

        head = (head + 1) % _Size;
        isFull = head == tail;
    }

    _Ty pop() noexcept {
        assert(!is_empty() && "Trying to pop an element from an empty buffer!");

        auto location = reinterpret_cast<_Ty*>(&buffer[tail * sizeof(_Ty)]);
        auto result = std::move(*location);
        std::destroy_at(location);

        tail = (tail + 1) % _Size;
        isFull = false;

        return result;
    }

    _NODISCARD constexpr _Ty& peek() noexcept {
        assert(!is_empty() && "Trying to peek into an empty buffer!");

        return *reinterpret_cast<_Ty*>(&buffer[tail * sizeof(_Ty)]);
    }

    _NODISCARD constexpr const _Ty& peek() const noexcept {
        assert(!is_empty() && "Trying to peek into an empty buffer!");

        return *reinterpret_cast<const _Ty*>(&buffer[tail * sizeof(_Ty)]);
    }

    _NODISCARD constexpr bool is_empty() const noexcept {
        return !isFull && tail == head;
    }

    _NODISCARD constexpr size_t get_capacity() const noexcept {
        return _Size;
    }

    _NODISCARD constexpr size_t get_size() const noexcept {
        if (isFull)
            return _Size;

        return (_Size + head - tail) % _Size;
    }

    _NODISCARD _CONSTEXPR17 _Ty* data() noexcept {
        return reinterpret_cast<_Ty*>(buffer);
    }

    _NODISCARD _CONSTEXPR17 const _Ty* data() const noexcept {
        return reinterpret_cast<const _Ty*>(buffer);
    }
};
}
```

I want to take advantage of all the new features (C++17) while staying compatible with older compilers (preferably all older compilers, but C++11 is probably as old as anything I will actually compile). Any suggestions welcome. (I am trying to use this class as an example to follow when building other classes.)

In addition, regarding the use of `_CONSTEXPR17` on the `data` functions: I was wondering why use the macro vs. just `constexpr`? (I based the macro usage on the `std::array` implementation; its `data` function uses `_CONSTEXPR17` instead of plain `constexpr`.)
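As a side note, before reviewing the placement-new and alignment details, it can help to have an executable reference model of the intended push/pop/peek semantics. Here is a small Python sketch of the same fixed-capacity ring (my reading of the intended behavior, not the code under review):

```python
class RingBuffer:
    """Fixed-capacity ring buffer mirroring the C++ class's head/tail/isFull scheme."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0          # next write position
        self.tail = 0          # next read position
        self.full = False

    def is_empty(self):
        return not self.full and self.head == self.tail

    def size(self):
        if self.full:
            return len(self.buf)
        return (self.head - self.tail) % len(self.buf)

    def push(self, item):
        assert not self.full, "push into full buffer"
        self.buf[self.head] = item
        self.head = (self.head + 1) % len(self.buf)
        self.full = self.head == self.tail

    def pop(self):
        assert not self.is_empty(), "pop from empty buffer"
        item, self.buf[self.tail] = self.buf[self.tail], None
        self.tail = (self.tail + 1) % len(self.buf)
        self.full = False
        return item
```

A model like this makes it easy to check edge cases (full wrap-around, pop-then-push, emptiness after draining) before porting the logic to raw storage.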

## c – How to save samples before and after an event in a circular buffer?

Hello, I am processing a 17-hour .wav audio dataset (16-bit PCM, 192 kHz) to simulate real-time processing that will eventually run on an ESP32, Arduino DUE, or a Raspberry Pi, depending on the results.

How am I dealing with this now?

First I sliced this file into 1-minute samples, then created a C program that converts each file into a .CSV (skipping the entire .wav header and taking only the data fields).

I then run the generated .CSV file through a second program. It opens the file and fills a circular buffer with 130 ms of samples (24900 values). Once the buffer is completely full, the code starts computing the RMS (root mean square) over a sliding window with 10 ms of overlap; the window size is 30 ms. When a value greater than 1000 is obtained, it is considered an event.

Below are figures illustrating my goal:

Here the window with the 50 ms before and after that I refer to is shown:

My question, which I cannot solve, is:

How should I save these 50 ms before and after the event, given that the event can occur anywhere in the buffer, and what happens if the event lasts longer than one window?

Some information to facilitate understanding:

```
130ms = 24900 values from my .csv file
50ms  = 9600 values
30ms  = 5700 values
10ms  = 1920 values
```

I have already searched several sources, but most DSP and data-structures references treat these topics superficially, only showing what a circular buffer is, not how to use it in a useful way.

Here is my code sketch, which seems to take a wrong approach to the problem, but I am really out of ideas on how to proceed. I created a dataset from 1 to 100 to make debugging easier:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

// Window and buffer sizes (reduced for debugging with the 1..100 dataset)
#define window_size 3   // 30 ms
#define buffer_size 13  // 130 ms = 50 ms + 30 ms + 50 ms

int main()
{
    // Variables.
    int buffer[buffer_size] = {0}; // circular buffer
    int write = 0;
    int i = 0, j = 0;
    int read1 = 0;
    int write1 = 0;
    int counter_elements = 0;
    int number_lines = 0;
    int save_line = 0;
    int lines = 0;
    char c;
    char str[1024];     // stores the characters read as a string
    // RMS
    int rms = 0;
    int pre_sampling[5] = {0};

    // File handles.
    FILE *fp;
    FILE *LOG;
    FILE *log_rms_final;

    // Open the input file and check for NULL.
    if ((fp = fopen("generator.txt", "r")) == NULL)
    {
        printf("Error, cannot open file.\n");
        exit(1);
    }
    // Records the RMS values.
    LOG = fopen("RMSValues.csv", "a");
    // File that records the 50 ms before and after each event.
    log_rms_final = fopen("Log_RMS.csv", "a");

    // Read the file and process it:
    while (!feof(fp))
    {
        fgets(str, 1024, fp);               // read a line into str
        // buffer[write] = (atoi(str) & 0xff00) / 256;
        buffer[write] = atoi(str);          // store the converted value at the head position
        write = (write + 1) % buffer_size;  // make it wrap around
        counter_elements++;

        c = fgetc(fp);
        if (c == '\n')
        {
            ++lines;
        }
        printf("%d\n", lines);

        // If the buffer is full.
        if (counter_elements == buffer_size)
        {
            // Sliding window (RMS computation not filled in yet).
            for (i = 0; i < window_size; i++)
            {
            }

            fprintf(LOG, "\n%d", rms);      // write to the RMS log

            if (rms > 1000)
            {
                printf("rms: %d\n", rms);

                // Save the 50 ms before the event and the window.
                write1 = write;
                for (j = 0; j < 5; j++)
                {
                    // Walk backwards circularly through the buffer.
                    write1 = (write1 + (buffer_size - 1)) % buffer_size;
                    // The pre-sampling vector receives the previous 50 ms.
                    pre_sampling[j] = buffer[write1];
                }

                fprintf(log_rms_final, "%s", "\n");
                // Write the 50 ms vector to the log file in the correct order.
                for (j = 4; j >= 0; j--)
                {
                    fprintf(log_rms_final, "%d - pre\n", pre_sampling[j]);
                }

                fprintf(log_rms_final, "%s", "\n");
                /*
                for (j = 0; j < window_size; j++)
                {
                    fprintf(log_rms_final, "%d - window\n", buffer[read1]);
                }
                */
                fprintf(log_rms_final, "%s", "\n");

                // Save the 50 ms after the event.
                /*
                fseek(log_rms_final, save_line - 3, save_line);
                for (j = 0; j < 5; j++)
                {
                    fgets(str, 1024, fp);
                    fprintf(log_rms_final, "%d - post\n", atoi(str));
                }
                */
            }

            rms = 0;

            // Make the tail circular, jumping 160 at a time.

            // my counter should consume 50 ms more
            counter_elements = counter_elements - 2;
        }
        rms = 0;
    }

    fclose(fp);
    fclose(LOG);
    fclose(log_rms_final);
    return 0;
}
```

Any suggestions would be welcome. Thank you.
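One common pattern for "N samples before and M after an event" is: keep a ring buffer of at least N samples as history; when the detector fires, snapshot the last N samples plus the trigger immediately, then keep appending incoming samples to the event record until M post-event samples have arrived. Re-arming the post-counter while the detector keeps firing also handles events that outlast one window. A language-agnostic sketch of this idea, in Python with toy sizes (not the asker's C code):

```python
from collections import deque

def capture_events(samples, pre=5, post=5, threshold=1000):
    """Return one record per event: `pre` samples before the trigger,
    the event samples themselves, and `post` samples after it ends."""
    history = deque(maxlen=pre)   # ring buffer of the most recent samples
    records = []
    current = None                # record currently being filled
    remaining_post = 0

    for x in samples:
        if current is not None:
            current.append(x)
            if x > threshold:
                remaining_post = post     # event still ongoing: re-arm the counter
            else:
                remaining_post -= 1
                if remaining_post == 0:   # collected all post-event samples
                    records.append(current)
                    current = None
        elif x > threshold:
            # Event detected: snapshot the pre-history plus the trigger sample.
            current = list(history) + [x]
            remaining_post = post
        history.append(x)

    if current is not None:
        records.append(current)           # event ran off the end of the stream
    return records

data = [0] * 8 + [1500, 1600] + [0] * 8
recs = capture_events(data, pre=3, post=3)
print(recs)  # one record: 3 pre samples, the two event samples, 3 post samples
```

The key point is that the pre-event samples must be copied out of the ring buffer at detection time, before the writer wraps around and overwrites them; the post-event samples need no buffering at all, only a countdown.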

## co.combinatorics – Circular permutations (bracelets) of similar things (reflections are equivalent) using Pólya enumeration

How many circular permutations are there of N objects, where n1 are identical of one type, n2 are identical of another type, and so on, with n1 + n2 + n3 + ... = N? There is a similar question, but it does not address the case in which reflections fall in the same equivalence class. $$\frac{1}{N} \sum_{d \mid N} \phi(d)\, p_d^{N/d}$$ This is the count when reflections are not identified. How does the equation change under this new restriction?

Note: I could not comment on that question because of my low reputation, so I asked this one instead.
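For small parameter values, the desired count can be sanity-checked by brute force: take a canonical representative of each arrangement under all rotations and reflections and count the distinct representatives. A Python sketch of that check (not the closed-form answer being asked for):

```python
from itertools import permutations

def count_bracelets(multiset):
    """Count arrangements of a multiset of beads on a circle,
    with rotations AND reflections considered equivalent (bracelets)."""
    n = len(multiset)
    seen = set()
    for perm in set(permutations(multiset)):
        rotations = [perm[i:] + perm[:i] for i in range(n)]
        reflections = [tuple(reversed(r)) for r in rotations]
        # Canonical form: lexicographically smallest over the whole dihedral orbit.
        seen.add(min(rotations + reflections))
    return len(seen)

# Two beads of each of two types on a 4-cycle: "adjacent" and "alternating".
assert count_bracelets("AABB") == 2
```

This is exponential in N, so it is only useful for verifying a proposed formula on small cases.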

## Filters: why don't circular polarizers reduce light transmission by one stop?

The degree to which a polarizing filter reduces the light that passes through it is measured by how much it attenuates light it is supposed to let through. That is, light already polarized in one direction is aimed at a polarizer rotated to pass light polarized in that direction. The measured difference between the brightness of the light before and after it passes through the polarizer is the transmission loss.

When a polarizer is used with light that is polarized in more than one direction, how much light passes and how much is blocked depends on how much of the total light is polarized in the direction the filter is oriented to pass, and how much is not.

Roger Cicala at lensrentals.com did a study of circular polarizing filters a while ago and wrote a blog post on this topic: My not nearly complete, but rather entertaining, circular polarizer filter article

He compared six different CPLs, all 77 mm in diameter and priced from \$102 to \$200, purchased from a major, well-known online seller (listed in alphabetical order):

• \$ 102 – B+W XS-Pro MRC-Nano High Transmission Circular Polarizer
• \$ 200 – Heliopan Circular Polarizer
• \$ 140 – Marumi EXUS Circular Polarizer Filter (EXUS is an acronym for a form of high transmission)
• \$ 150 – Sigma water-repellent circular polarizer filter
• \$ 103 – Tiffen Ultra Pol circular polarizing filter
• \$ 180 – Zeiss T * Circular polarization filter

He found that all of them were at least 99.9% efficient at polarizing light. He could not say whether any were more efficient, because 99.9% was the limit of his measurement setup.

He found that all of them were flat enough not to affect image quality any more than the others. In his words, "They all passed with flying colors."

Where they differed was in how much light they let pass when not polarizing the light. In other words, he shone already-polarized light through them with each filter rotated to pass as much light as it could. A 50% reduction would be exactly one stop. Here are the results from least to most transmissive:

• 55% – Tiffen (\$ 103)
• 58% – Heliopan (\$ 200)
• 66% – Zeiss (\$ 180)
• 68% – Sigma (\$ 150)
• 88% – B+W (\$102) (HT)
• 91% – Marumi (\$ 140) (HT)

Note that the transmission measurement is performed with light that is already polarized in the same direction the filter passes. Any loss due to polarized light that is not allowed through will be in addition to the transmission loss measured in this test.
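To relate these percentages to photographic stops, the loss in stops is the negative base-2 logarithm of the transmission, so 50% is exactly one stop. A quick sketch (Python, my own arithmetic applied to the figures above):

```python
import math

def stops_lost(transmission):
    """Light loss in photographic stops for a fractional transmission (0..1]."""
    return -math.log2(transmission)

# Sanity check: 50% transmission is exactly one stop.
assert abs(stops_lost(0.50) - 1.0) < 1e-9

# Applying this to the measured transmissions above:
for name, pct in [("Tiffen", 55), ("Heliopan", 58), ("Zeiss", 66),
                  ("Sigma", 68), ("B+W HT", 88), ("Marumi HT", 91)]:
    print(f"{name}: {stops_lost(pct / 100):.2f} stops")
```

So even the least transmissive filter here (55%) costs about 0.86 stop, while the high-transmission Marumi (91%) costs only about 0.14 stop, which is why "one stop" is not a reliable rule of thumb for CPLs.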

Roger noted that some shooters may actually want the ND effect of reduced transmission when using a polarizer, which is often used in bright sunlight. So it is not always true that more transmissive means better in terms of CPLs.

In terms of spectral response, the two high-transmission CPLs (B+W and Marumi) had almost identical curves between 430-700 nm, with a flat line from about 500-700 nm and a drop-off at the blue end, with significant attenuation of UV wavelengths.

The rest had curves similar to each other but different from the two high-transmission filters. There was no UV cutoff or drop-off in the blue region; there was a slight rise through the green wavelengths, then a very modest decrease from green to red before a slight rise into the infrared.

None of the filters showed differences between colors when oriented to block the most light versus when rotated to block the least.

Roger's first conclusion:

If you are buying a circular polarizing filter because you want some circular polarization, it does not seem to matter much which one you choose; they all polarize like gangbusters. So today I saved you some money.

Then he went on to say:

The second point, one I was told about before doing all these tests, is to set the white balance after mounting the CP filter, not before, because CP filters have a color cast. Or just shoot raw and fix it later, which is what most of us do anyway.

The last important conclusion he drew was what he called "the painful one":

I did not want to test filters; I really did not. But people wanted me to do it. So I chewed up my test-equipment budget buying laser transmission gear and an optical spectrometer, spent a few weeks getting everything calibrated and setting standards, and then a couple of days testing these CP filters. I did this in clear violation of Roger's Third Law: no good deed goes unpunished.

Once I finished, I told Aaron that I had just documented that CP filters have different percentages of light transmission and different color casts. And that the high-transmission filters had a certain look, different from the normal CP filters, which were all quite similar. I was proud that my investment of time and money had been worth it.

Aaron took the filters off, laid them on a piece of paper, took this picture with his cell phone, and said: "Yes, you're right."

Later, Roger followed up with another post in which he tested a couple of cheaper CPLs:

• \$ 35 – Tiffen Circular Polarizer 77mm
• \$ 45 – Hoya 77mm HRT UV Circular Polarizer (ostensibly a high transmission filter)

And measured the following:

• 38% – Tiffen CP with a spectrum very similar to the four non-HT filters above.
• 53% – Hoya HRT with a spectrum very similar to the two previous HT filters.

He noted that the lower transmission seemed to be related to the lack of antireflection coatings on the cheaper filters. They were also as flat as his test could measure, as were the first six filters.

… There is not much doubt that, in terms of polarizing light, cheap CP filters do it very well. Also, as you would expect from uncoated or partially coated filters, they reflect a LOT more light. That is a major problem in a clear or UV protection filter. Honestly, I'm not sure it's a big problem for a polarizing filter.

## Algorithms – Length of a circular doubly linked list with a single pointer

I have a pointer to a circular doubly linked list whose nodes hold random values of 0 or 1.
Example: `[{prev,0,next},{prev,0,next},{prev,1,next},{prev,0,next}...]`
I need to find the length of the list without creating other pointers.

I thought about the `Josephus Circle` algorithm, but without success, because I do not want to modify the data in my list (e.g., setting nodes to null).

My main problem is that I cannot tell when I have reached the end of the list.
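If "no other pointers" is read as "no additional node pointers beyond the given one, other than an unavoidable traversal cursor and a counter", then the circularity itself marks the end: walk `next` and stop when the walk returns to the starting node (compared by identity, not by value, since values repeat). A Python sketch of that reading (my interpretation of the constraint, not a confirmed answer):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def make_circular(values):
    """Build a circular doubly linked list and return one of its nodes."""
    nodes = [Node(v) for v in values]
    for a, b in zip(nodes, nodes[1:] + nodes[:1]):
        a.next, b.prev = b, a
    return nodes[0]

def length(start):
    """Walk `next` until we are back at the start node (identity comparison)."""
    count = 1
    node = start.next
    while node is not start:
        count += 1
        node = node.next
    return count

head = make_circular([0, 0, 1, 0])
assert length(head) == 4
```

The identity comparison is the crucial part: comparing node addresses (in C, pointer equality) makes the 0/1 payloads irrelevant, so no data needs to be modified.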

## gt.geometric topology – Circle bundles and surface bundles that do not admit strongly irreducible Heegaard splittings

Let $$S$$ be a closed connected orientable surface with $$g(S) > 0$$. Jennifer Schultens, in her article “The Classification of Heegaard Splittings for (compact orientable surface) $$\times S^1$$”, proved that $$S \times S^1$$ does not admit any strongly irreducible Heegaard splitting. My questions are:

1. Are there other closed (that is, non-trivial) circle bundles that do not admit strongly irreducible Heegaard splittings? Also, is there a classification of closed circle bundles that do not admit strongly irreducible Heegaard splittings?
2. Are there other surface bundles that do not admit strongly irreducible Heegaard splittings?

## 8 – How to create a circular diagram with links to content

On a WordPress website, I saw a circular diagram in which each of the elements linked to specific content on the site.
The diagram is responsive, meaning that it not only resizes the circle but also changes the size of the elements and omits the descriptions when viewed on a mobile screen.

Example: https://clearmindgraphics.com. The source code tells me it is built with the "Bridge" WordPress theme.

How can I reproduce this in Drupal, or is there already a module that provides this?

## How can I get high-precision circle drawing in Graphics, across ten orders of magnitude?

I need to visualize some fractal circle arrangements where the tangent circles can vary in size by ten orders of magnitude. In such cases, the circles are not drawn as tangent.

Example: the red and blue circles generated below are tangent at {0, 0}, but with a large red radius they are not drawn that way, and the drawing error changes with PlotRange.

```mathematica
Manipulate[
 Graphics[{Red, Disk[{-(10^logR), -(10^logR)}/Sqrt[2], 10^logR],
   Blue, Disk[{1, 1}/Sqrt[2], 1]},
  Axes -> True,
  ImageSize -> 200,
  PlotRange -> {{-plotRange, plotRange}, {-plotRange, plotRange}},
  PlotRangeClipping -> True],
 {{logR, 6}, 0, 10, 1, Appearance -> "Labeled"},
 {plotRange, 1, 100, 1, Appearance -> "Labeled"}
]
```

Is there any way to fix this?

Note that it breaks even here, where all the inputs to Graphics are exact numbers.
(I also tried SetPrecision on those inputs, without success.)
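A plausible cause (my hypothesis, not confirmed for Mathematica's front end): the final rendering happens in fixed machine precision, where a unit offset next to a 10^10-scale coordinate is below one ulp, so the tangency cannot survive the conversion to screen coordinates. A quick Python illustration of the effect in 32-bit floats, which are typical of graphics pipelines:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# In double precision the unit offset is still representable...
assert 1e10 + 1.0 != 1e10
# ...but in single precision it vanishes entirely (the ulp at 1e10 is ~1024):
assert f32(1e10 + 1.0) == f32(1e10)
```

If this is the mechanism, exact inputs and SetPrecision cannot help, because the precision is lost downstream of Graphics; the usual workaround is to rescale or translate coordinates so the features of interest are within a few orders of magnitude of each other before rendering.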