javascript: click on an isometric plane and get the normal coordinates

I have photos distributed as cells. When I click, I get the corresponding row and column.
console.log("Col: " + X + " Row: " + Y);

When applying an isometric view conversion like this:

ctx.translate(0, 300);
ctx.scale(1, 0.5);
ctx.rotate(-45 * Math.PI / 180);

I do not know what mathematical formula I have to apply to recover the original coordinates correctly.
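For reference, a sketch of the math (in Python for brevity; the function names are mine): the three canvas calls map each drawn point by applying rotate(-45°) first, then scale(1, 0.5), then translate(0, 300), so a clicked screen point is mapped back to the plane by undoing those steps in reverse order.

```python
import math

SQ2 = math.sqrt(2.0) / 2.0   # cos(45 deg) = sin(45 deg)

def world_to_screen(x, y):
    """Same mapping as ctx.translate(0, 300); ctx.scale(1, 0.5); ctx.rotate(-45 deg)."""
    rx = (x + y) * SQ2           # rotate by -45 degrees
    ry = (-x + y) * SQ2
    return rx, ry * 0.5 + 300.0  # scale y by 0.5, then shift down by 300

def screen_to_world(sx, sy):
    """Invert the three steps in reverse order."""
    rx = sx
    ry = (sy - 300.0) * 2.0      # undo the translate, then undo the scale
    x = (rx - ry) * SQ2          # rotate by +45 degrees
    y = (rx + ry) * SQ2
    return x, y
```

With `screen_to_world` you can divide the recovered (x, y) by the cell size to get the column and row back.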


GPS coordinates are not updated

Hi, I'm doing a project with a mobile GPS tracker. I'm using a NEO-6M module with my Raspberry Pi, and the following code to get the location:

import serial
import time
from decimal import Decimal

delay = 1
# GPIO.setmode(GPIO.BOARD)

def find(s, ch):
    for i, ltr in enumerate(s):
        if ltr == ch:
            yield i

port = serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1)
cd = 1
try:
    while cd <= 50:
        ck = 0
        fd = ''
        while ck <= 50:
            # the read line was cut off in the post; reading one byte at a
            # time into the buffer is assumed here
            rcv = port.read().decode("ascii", errors="ignore")
            fd = fd + rcv
            ck = ck + 1
        if '$GNRMC' in fd:
            # ps and dif were not defined in the post; the obvious choices are
            ps = fd.find('$GNRMC')
            dif = len(fd) - ps
            if dif > 50:
                data = fd[ps:(ps + 50)]
                p = list(find(data, ","))
                lat = data[(p[2] + 1):p[3]]
                lon = data[(p[4] + 1):p[5]]
                s1 = Decimal(lat[2:len(lat)]) / 60   # minutes -> degrees
                s11 = int(lat[0:2])                  # whole degrees
                s1 = s11 + s1
                s2 = Decimal(lon[3:len(lon)]) / 60
                s22 = int(lon[0:3])
                s2 = s22 + s2
                print("Latitude: " + str(s1))
                print("Longitude: " + str(s2))
        cd = cd + 1
        print(cd)
except KeyboardInterrupt:
    print("Thank You")

The problem is that when I move around with the tracker, the location does not change: it stays at the initial location. I would like to know whether that is due to the programming or to the data I receive from the module.
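As a note on the arithmetic the loop performs: NMEA RMC latitude arrives as ddmm.mmmm (two degree digits) and longitude as dddmm.mmmm (three), and decimal degrees are degrees + minutes / 60, which is exactly what the slicing above computes. A minimal sketch of just that conversion (function name mine):

```python
from decimal import Decimal

def nmea_to_decimal(value, deg_digits):
    """NMEA ddmm.mmmm (latitude: deg_digits = 2) or dddmm.mmmm
    (longitude: deg_digits = 3) -> decimal degrees."""
    degrees = int(value[:deg_digits])          # whole degrees
    minutes = Decimal(value[deg_digits:]) / 60 # minutes -> fractional degrees
    return degrees + minutes
```

Also worth checking when the position never changes: field 2 of the RMC sentence is the fix status, 'A' for a valid fix and 'V' for void; while the receiver has no fix, the position fields are typically empty or stale.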

Algorithms – How to check if a list of XY coordinates meets the safety distance between them?

My background is not CS, so forgive any inappropriate terminology. Basically I want to check whether a point in the XY plane is "too close" to the other points, and do that for each point. In other words, if I draw a circle of radius R at each point, whether any circle intersects another circle in the plane.

I want to code this in Python, if that matters.
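One concrete reading of "too close": two circles of radius R intersect exactly when the distance between their centres is less than 2R, so the whole set is safe when every pair of points is at least 2R apart. A minimal sketch (function name mine) of the plain O(n²) pairwise check:

```python
import math

def all_safe(points, R):
    """True if no two points are closer than 2*R, i.e. no two
    radius-R circles drawn at the points intersect."""
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if math.hypot(dx, dy) < 2 * R:
                return False
    return True
```

For large point sets a spatial index helps; for example, `scipy.spatial.cKDTree.query_pairs(2 * R)` returns only the offending pairs without comparing every pair.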

How can I develop web applications with features for maps, coordinates, and free address data?



c++ – Problem updating the coordinates of a QGraphicsItem when moving it with the mouse

I hope you are all well.
My problem is this: I have a class called Item that inherits from QGraphicsItem, so I have to implement its virtual methods paint() and boundingRect(), whose values I set through the member variables of the object.
Below I attach the header and source code:



#include <QGraphicsItem>
#include <QPixmap>

class Scene;

class Item : public QGraphicsItem {
    QString name;
    QPixmap image;
    bool selection;
    QPointF point;
    QSizeF tamanio;

public:
    Item(QPixmap image, QGraphicsItem *parent = nullptr);
    void setName(QString);
    QString getName();
    void setImage(QPixmap);
    QPixmap getImage();
    void setPoint(QPointF point);
    QPointF getPoint();
    void setTamanio(QSizeF tamanio);
    QSizeF getTamanio();
    void setSelection(bool selection);
    bool getSelection();
    QRectF boundingRect() const override;
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option,
               QWidget *widget) override;
};


#include "item.h"
#include "scene.h"

Item::Item(QPixmap image, QGraphicsItem *parent) : QGraphicsItem(parent) {
    this->image = image;
}

void Item::setName(QString name) {
    this->name = name;
}

QString Item::getName() {
    return name;
}

void Item::setImage(QPixmap image) {
    this->image = image;
}

QPixmap Item::getImage() {
    return image;
}

void Item::setPoint(QPointF point) {
    this->point = point;
}

QPointF Item::getPoint() {
    return point;
}

void Item::setTamanio(QSizeF tamanio) {
    this->tamanio = tamanio;
}

QSizeF Item::getTamanio() {
    return tamanio;
}

void Item::setSelection(bool selection) {
    this->selection = selection;
}

bool Item::getSelection() {
    return selection;
}

QRectF Item::boundingRect() const {
    return QRectF(point.x(), point.y(), tamanio.width(), tamanio.height());
}

void Item::paint(QPainter *painter, const QStyleOptionGraphicsItem *,
                 QWidget *) {
    painter->drawPixmap(point.x(), point.y(), tamanio.width(),
                        tamanio.height(), image);
}

Then I have another class called Scene that inherits from QGraphicsScene. In its constructor I add the items with their respective positions, sizes and images through the Item methods. I also enable the QGraphicsItem::ItemIsMovable flag so I can drag them with the mouse through the scene, and this is where my problem lies.
I can drag the object fine, but the coordinates of the point are not updated, which causes problems when I try to find its position.

Note that the position is only set through the values used in paint() and boundingRect(), and only when the scene is created, which is when all the items are initialized and added.

I attach the header and source code of the Scene class.


#include <QGraphicsScene>
#include "item.h"

class Item;

class Scene : public QGraphicsScene {
public:
    Scene(QObject *parent = nullptr);
};


#include "scene.h"

#define XPOS 0.0 // constants for the dimensions of the scene
#define YPOS 0.0
#define WIDTH 663.0
#define HEIGHT 410.0

Scene::Scene(QObject *parent)
    : QGraphicsScene(XPOS, YPOS, WIDTH, HEIGHT, parent) {
    Item *c = new Item(QPixmap(":/img/targetOff.png"));
    c->setPoint(QPointF(0.0, 0.0));
    c->setTamanio(QSizeF(/* size values lost in the original post */));
    c->setName("A");
    c->setSelection(false);

    c->setFlag(QGraphicsItem::ItemIsMovable, true); // mobility
    addItem(c);

    Item *c2 = new Item(QPixmap(":/img/targetOff.png"));
    c2->setPoint(QPointF(300.0, 200.0));
    c2->setTamanio(QSizeF(/* size values lost in the original post */));
    c2->setName("Item B");
    c2->setSelection(false);
    c2->setAcceptHoverEvents(true);
    c2->setFlag(QGraphicsItem::ItemIsMovable, true); // mobility
    addItem(c2);
}

What I deduce is that these methods, although invoked automatically when the object is created, are not re-run for subsequent changes, in my case the coordinates of the item; I would have to do that manually through the setter methods, but I also cannot find the new coordinates after moving the item.

Thanks for reading; I look forward to your answers.
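In Qt, dragging an ItemIsMovable item changes the item's pos() (its offset within the scene); it never touches member variables like point, and paint()/boundingRect() are expressed in the item's local coordinates. That is consistent with point keeping its initial value. A toy model (plain Python, not Qt; the names are illustrative) of that relationship:

```python
# Toy model of QGraphicsItem's position bookkeeping (not actual Qt).
class ToyItem:
    def __init__(self, point):
        self.point = point      # local top-left, like the Item class above
        self.pos = (0.0, 0.0)   # offset the framework keeps internally

    def drag_by(self, dx, dy):
        """What an ItemIsMovable drag effectively does: change pos only."""
        self.pos = (self.pos[0] + dx, self.pos[1] + dy)

    def scene_top_left(self):
        """Where the item actually sits in the scene: offset + local point."""
        return (self.pos[0] + self.point[0], self.pos[1] + self.point[1])
```

In actual Qt code the equivalent information is item->pos() or item->scenePos(); to be notified of every move you can set the QGraphicsItem::ItemSendsGeometryChanges flag and override itemChange().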

trace – Generate frame coordinates for the TikZ draw command

Is there any intelligent way to generate a TikZ plot sequence of drawing coordinates in Mathematica? The following commands work quite well

y[x_] := Log[2, (2 x - 200)/(x - 200)]
Table[{i, y[i]}, {i, 201, 2000, 10}] // N

but it results in the output

{{201., 7.65821}, {211., 4.33498}, ...

I can process it later in a text editor, but it would be better if I could produce the output in Mathematica as

(201., 7.65821) -
(211., 4.33498) -
(221., 3.52655) -
(231., 3.07923) -
(241., 2.78200) -

or something similar, so it can easily be used with the TikZ draw command. I could loop over the list and 'format' it:

Clear[i, y];
y[x_] := Log[2, (2 x - 200)/(x - 200)];
coordinates = Table[{i, y[i]}, {i, 201, 2000, 10}] // N;
tikzlist = "";
For[i = 1, i <= Length[coordinates], i++,
 x = coordinates[[i]][[1]];
 y = coordinates[[i]][[2]];
 tikzlist = tikzlist <> "(" <> ToString[x] <> ", " <> ToString[y] <> ") -";
]
tikzlist = StringTake[tikzlist, StringLength[tikzlist] - 2] <> ";"

but perhaps there is some shorter one-line command that does the job better and more efficiently. TIA.
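Not Mathematica, but for comparison, the join-then-terminate pattern is a one-liner in most languages; a Python sketch with stand-in data (in Mathematica itself, StringRiffle with a " -- " separator plays the same role):

```python
# Stand-in for the {x, y} pairs produced by Table[...] // N above.
coordinates = [(201.0, 7.65821), (211.0, 4.33498), (221.0, 3.52655)]

# Format each pair as "(x, y)", join with the TikZ path separator, terminate.
tikz_path = " -- ".join("({:g}, {:g})".format(x, y) for x, y in coordinates) + ";"
print(tikz_path)
```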

Cartesian and ellipsoidal coordinates in Python

1. This function converts 3D Cartesian coordinates into ellipsoidal coordinates using the GRS80 parameters.
2. This function converts ellipsoidal coordinates into Cartesian coordinates using the GRS80 parameters.
3. This function converts decimal degrees to sexagesimal degrees (degrees, minutes, seconds).
How can I write these three functions in Python?

import math

# (semi-major axis a, inverse flattening 1/f)
GRS80 = 6378137, 298.257222100882711
WGS84 = 6378137, 298.257223563

def ell2xyz(latitude, longitude, ellHeight, ellipsoid=GRS80):
    # the body posted in the question computes the ellipsoidal -> Cartesian
    # direction, so it is named accordingly here
    a, finv = ellipsoid
    f = 1 / finv
    e2 = 2 * f - f ** 2   # first eccentricity squared, (a**2 - b**2) / a**2
    φ = math.radians(latitude)
    λ = math.radians(longitude)
    N = a / math.sqrt(1 - e2 * math.sin(φ) ** 2)  # prime vertical radius
    x = (N + ellHeight) * math.cos(φ) * math.cos(λ)
    y = (N + ellHeight) * math.cos(φ) * math.sin(λ)
    z = ((1 - e2) * N + ellHeight) * math.sin(φ)
    return x, y, z
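The question also asks for the other two functions. A hedged sketch (the function names and the simple fixed-point iteration are my own choices, not from the post):

```python
import math

GRS80 = 6378137.0, 298.257222100882711  # (semi-major axis a, inverse flattening 1/f)

def xyz2ell(x, y, z, ellipsoid=GRS80):
    """Cartesian XYZ (metres) -> geodetic latitude, longitude (degrees), height.
    Uses a plain fixed-point iteration on the latitude."""
    a, finv = ellipsoid
    f = 1.0 / finv
    e2 = 2 * f - f ** 2                    # first eccentricity squared
    lon = math.atan2(y, x)
    p = math.hypot(x, y)                   # distance from the rotation axis
    phi = math.atan2(z, p * (1 - e2))      # first guess
    for _ in range(10):                    # converges quickly near the surface
        N = a / math.sqrt(1 - e2 * math.sin(phi) ** 2)
        h = p / math.cos(phi) - N
        phi = math.atan2(z, p * (1 - e2 * N / (N + h)))
    return math.degrees(phi), math.degrees(lon), h

def deg2dms(dd):
    """Decimal degrees -> (degrees, minutes, seconds)."""
    sign = -1 if dd < 0 else 1
    dd = abs(dd)
    d = int(dd)
    m = int((dd - d) * 60)
    s = (dd - d - m / 60) * 3600
    return sign * d, m, s
```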

multivariable calculus – Triple integral: changing coordinates from Cartesian to spherical $(dx \wedge dy \wedge dz) \rightarrow (d\rho \wedge d\phi \wedge d\theta)$

Let $x = \rho \sin(\phi) \cos(\theta)$, $y = \rho \sin(\phi) \sin(\theta)$, $z = \rho \cos(\phi)$. Then
$$dx = d(\rho \sin(\phi) \cos(\theta)) = \sin(\phi)\cos(\theta)\, d\rho - \rho \sin(\phi)\sin(\theta)\, d\theta + \rho \cos(\phi)\cos(\theta)\, d\phi$$
$$dy = d(\rho \sin(\phi) \sin(\theta)) = \sin(\phi)\sin(\theta)\, d\rho + \rho \sin(\phi)\cos(\theta)\, d\theta + \rho \cos(\phi)\sin(\theta)\, d\phi$$
$$dz = d(\rho \cos(\phi)) = \cos(\phi)\, d\rho - \rho \sin(\phi)\, d\phi$$
I computed $dx \wedge dy$ first and got
$$dx \wedge dy = \rho \sin^2(\phi)\, d\rho \wedge d\theta - \rho^2 \sin(\phi)\cos(\phi)\, d\theta \wedge d\phi$$
So then $dx \wedge dy \wedge dz$ should be:
$$dx \wedge dy \wedge dz = \left(-\rho^2 \sin^3(\phi) - \rho^2 \sin(\phi)\cos^2(\phi)\right) d\rho \wedge d\theta \wedge d\phi = -\rho^2 \sin(\phi)\, d\rho \wedge d\theta \wedge d\phi$$
But I know it is supposed to be positive. I am sure I have made some minor mistake, but I have been staring at this for a while and am beginning to wonder if I am conceptually missing something bigger.
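For what it is worth, the computation above is consistent; the sign is only a matter of ordering. One transposition of adjacent factors flips the sign of a wedge product, so
$$dx \wedge dy \wedge dz = -\rho^2 \sin(\phi)\, d\rho \wedge d\theta \wedge d\phi = +\rho^2 \sin(\phi)\, d\rho \wedge d\phi \wedge d\theta,$$
and in the ordering $(\rho, \phi, \theta)$ from the title the coefficient is the familiar positive Jacobian $\rho^2 \sin(\phi)$.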

The x, y coordinates of the mouse click cannot be translated to the color of the pixel – Assembly

I have created a very short and simple code with 2 procedures that are supposed to get the coordinates of a mouse click and then print the color of the pixel at those exact coordinates. I cannot seem to find where the problem is: it prints 0 no matter where you click on the screen. Please help me as soon as possible!

; ----------------------------------------------------------

proc printpixel
    mov cx, 8d      ; x coordinate
    mov dx, 8d      ; y coordinate
    mov al, 4       ; color that is supposed to be printed at the end
    ; print the pixel:
    mov bh, 0h
    mov ah, 0ch
    int 10h
    ret
endp printpixel
; ----------------------------------------------------------
proc LeftButtonClick
    mov ax, 0000h   ; reset the mouse
    int 33h
    cmp ax, 0FFFFh
    jne nomouse     ; (the nomouse label is not shown in the post)
    mov ax, 0001h   ; show the mouse cursor
    int 33h
MouseLP:            ; loop until cx = x of the click and dx = y
    mov ax, 0003h   ; get mouse position and button status
    int 33h
    cmp bx, 1       ; check for the left mouse button
    jne MouseLP     ; loop until the mouse is clicked
    call findcolor
    ret
endp LeftButtonClick
; ----------------------------------------------------------
proc findcolor
    xor bh, bh
    mov ah, 0dh
    int 10h         ; al = color of the pixel at (cx, dx)
    mov dl, al
    add dl, '0'     ; convert the digit to ASCII
    mov ah, 02h
    int 21h         ; print it
    ret
endp findcolor

start:              ; (entry label assumed; the post ends with "end start")
    mov ax, @data
    mov ds, ax
    ; graphics mode
    mov ax, 13h
    int 10h

    call printpixel
    call LeftButtonClick

    ; wait for a key press
    mov ah, 00h
    int 16h

    ; return to text mode
    mov ah, 0
    mov al, 2
    int 10h

    mov ax, 4c00h
    int 21h
end start

The printed value is supposed to be 4 (the color number set at the beginning), but it still prints 0. Why? What is wrong with the code?

How to graph using cylindrical coordinates?

I need to graph the curves x^2 + y^2 = 4 and x^2 + y^2 = 25 in the cylindrical coordinate system, but I do not know how. I substituted the values of x and y with their cylindrical counterparts, but I do not know how to find the coordinates of the graph or how to graph it in general.
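For reference, the substitution step works out as follows: with $x = r\cos(\theta)$, $y = r\sin(\theta)$, $z = z$,
$$x^2 + y^2 = r^2\cos^2(\theta) + r^2\sin^2(\theta) = r^2,$$
so $x^2 + y^2 = 4$ becomes $r = 2$ and $x^2 + y^2 = 25$ becomes $r = 5$: two cylinders, i.e. circles of constant radius traced for every angle $\theta$ and at every height $z$.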